Introduction

Subimited by: Susan Bataju

For the Lab 3, Stellar Classification Dataset was choosen [1][2]. The light for far away objects are red-shifted because the universe is expanding and the wavelenght of light is streched. Quasar are mainly found in the center of galaxy and are the most energetic stallar objects, they are also more common in early universe. The objects which were formed in distance past is more redshifed than objects which are relatively young, so Quasar are vital for study of early universe.

Content

The data consists of 100,000 observations of space taken by the SDSS (Sloan Digital Sky Survey). Every observation is described by 17 feature columns and 1 class column which identifies it to be either a star, galaxy or quasar. Following features were present is the dataset:

obj_ID = Object Identifier, the unique value that identifies the object in the image catalog used by the CAS alpha = Right Ascension angle (at J2000 epoch) delta = Declination angle (at J2000 epoch) u = Ultraviolet filter in the photometric system g = Green filter in the photometric system r = Red filter in the photometric system i = Near Infrared filter in the photometric system z = Infrared filter in the photometric system run_ID = Run Number used to identify the specific scan rereun_ID = Rerun Number to specify how the image was processed cam_col = Camera column to identify the scanline within the run field_ID = Field number to identify each field spec_obj_ID = Unique ID used for optical spectroscopic objects (this means that 2 different observations with the same spec_obj_ID must share the output class) class = object class (galaxy, star or quasar object) redshift = redshift value based on the increase in wavelength plate = plate ID, identifies each plate in SDSS MJD = Modified Julian Date, used to indicate when a given piece of SDSS data was taken fiber_ID = fiber ID that identifies the fiber that pointed the light at the focal plane in each observation

Usages

The use of classification technique stude in this report is most useful for astrophysicist studing early universe and the classification task is to classify object in three catagories Galaxies,Stars or Quasar. The model would be use in offline analysis to seperate Galaxies,Stars or Quasar as we study the data gathered throght January 2021 (latest).

Acknowledgment

The idea for PCA was taken from https://www.kaggle.com/code/wessamwalid/stellar-classification-sdss17-4-ml-models#Missing-Value-Analysis.

In [1]:
import sklearn
from sklearn.datasets import load_iris
import numpy as np
from sklearn.metrics import accuracy_score
from scipy.special import expit
import os
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import OneHotEncoder ,LabelEncoder
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import copy
from copy import deepcopy
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
import optuna
import xgboost as xgb
from sklearn.metrics import accuracy_score,f1_score,roc_auc_score,confusion_matrix,roc_curve,auc
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
import plotly.io as pio
pio.renderers.default = 'iframe' # or 'notebook' or 'colab' or 'jupyterlab'
# ds = load_iris()
# X = ds.data
# y = (ds.target>1).astype(int) # make problem binary
In [2]:
def plot_sigbkg(class_,name,y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat):   
#     plt.figure(figsize=(6, 4))
    plt.hist(yhat[:, class_][y_test1==class_],label='test sig',density=True,alpha=0.5)
    plt.hist(yhat[:, class_][y_test1!=class_],label='test bkg',density=True,alpha=0.5)
    plt.hist(ytrainhat[:, class_][y_train==class_],label='train sig',density=True,histtype='step')
    plt.hist(ytrainhat[:, class_][y_train!=class_],label='train bkg',density=True,histtype='step')

    counts,bin_edges = np.histogram(yvalhat[:, class_][y_train1==class_],density=True)
    bin_centers = (bin_edges[:-1] + bin_edges[1:])/2.
    plt.plot(bin_centers, counts,marker="o",linestyle="None",label="val sig")

    counts,bin_edges = np.histogram(yvalhat[:, class_][y_train1!=class_],density=True)
    bin_centers = (bin_edges[:-1] + bin_edges[1:])/2.
    plt.plot(bin_centers, counts,marker="*",linestyle="None",label="val bkg")
    plt.title(name)
    plt.legend()
    plt.tight_layout()
#     plt.show()

    
def plot_roc(class_,name,y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat):
#     plt.figure(figsize=(6,4))
    roc1 = roc_curve(1*(y_test1==class_) , yhat[:, class_])
    fpr,tpr,_=roc1
    plt.plot(fpr, tpr, 'b',label=f'test (area = {auc(fpr,tpr)*100:.1f})%')
    roc1 = roc_curve(1*(y_train==class_) , ytrainhat[:, class_])
    fpr,tpr,_=roc1
    plt.plot(fpr, tpr, 'r',label=f'train (area = {auc(fpr,tpr)*100:.1f})%')

    roc1 = roc_curve(1*(y_train1==class_) , yvalhat[:, class_])
    fpr,tpr,_=roc1
    plt.plot(fpr, tpr, 'g',label=f'val (area = {auc(fpr,tpr)*100:.1f})%')

    plt.legend()
    plt.title(name)
#     plt.show()
In [3]:
# https://www.kaggle.com/datasets/fedesoriano/stellar-classification-dataset-sdss17 

# https://www.kaggle.com/code/wessamwalid/stellar-classification-sdss17-4-ml-models#Missing-Value-Analysis

# load the data
df = pd.read_csv("star_classification.csv")
In [4]:
df.head()
Out[4]:
obj_ID alpha delta u g r i z run_ID rerun_ID cam_col field_ID spec_obj_ID class redshift plate MJD fiber_ID
0 1.237661e+18 135.689107 32.494632 23.87882 22.27530 20.39501 19.16573 18.79371 3606 301 2 79 6.543777e+18 GALAXY 0.634794 5812 56354 171
1 1.237665e+18 144.826101 31.274185 24.77759 22.83188 22.58444 21.16812 21.61427 4518 301 5 119 1.176014e+19 GALAXY 0.779136 10445 58158 427
2 1.237661e+18 142.188790 35.582444 25.26307 22.66389 20.60976 19.34857 18.94827 3606 301 2 120 5.152200e+18 GALAXY 0.644195 4576 55592 299
3 1.237663e+18 338.741038 -0.402828 22.13682 23.77656 21.61162 20.50454 19.25010 4192 301 3 214 1.030107e+19 GALAXY 0.932346 9149 58039 775
4 1.237680e+18 345.282593 21.183866 19.43718 17.58028 16.49747 15.97711 15.54461 8102 301 3 137 6.891865e+18 GALAXY 0.116123 6121 56187 842
In [5]:
df['class'].unique() # three classes
Out[5]:
array(['GALAXY', 'QSO', 'STAR'], dtype=object)
In [6]:
le = LabelEncoder() # hot hot encoded 
le.fit(df['class'].unique())
target = le.transform(df['class'])
df['target'] = target
df.drop(['class'],inplace=True,axis=1)

The classes are encoded as such and the total instances of each class is shown below.

In [7]:
print(len(target[target==0]), '--> 0 GALAXY')
print(len(target[target==1]), '--> 1 QSO')
print(len(target[target==2]), '--> 2 STAR')
59445 --> 0 GALAXY
18961 --> 1 QSO
21594 --> 2 STAR
In [8]:
ig, ax = plt.subplots()  
bars=plt.bar(le.classes_, [len(target[target==i]) for i in range(len(le.classes_)) ])
for index,data in enumerate([len(target[target==i]) for i in range(len(le.classes_)) ]):
    plt.text(x=index,y=10000,s=f'{data/len(target)*100:.0f}%',c='white',va='center', fontweight='bold')
plt.title("Class")
plt.show()

The data is split into 60%-20%-20%. To deal with imbalanced classes,

  • I will use sample weight on the loss function i.e more weight is give to miniority class. The sample weight of the classes is shown below. I have to multiply sample weight with loss function.
  • I can randomly sample equal number of instances.
In [9]:
from sklearn.utils.class_weight import compute_class_weight 
# https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html 
compute_class_weight('balanced',np.unique(target), target )
# If ‘balanced’, class weights will be given by n_samples / (n_classes * np.bincount(y))
Out[9]:
array([0.56074242, 1.75799448, 1.54363867])
In [10]:
# also can be calculated like so. 
print([len(target)/(3*i) for i in np.bincount(target) ])
[0.5607424229680097, 1.757994479897333, 1.5436386650612826]
In [11]:
df.describe()
Out[11]:
obj_ID alpha delta u g r i z run_ID rerun_ID cam_col field_ID spec_obj_ID redshift plate MJD fiber_ID target
count 1.000000e+05 100000.000000 100000.000000 100000.000000 100000.000000 100000.000000 100000.000000 100000.000000 100000.000000 100000.0 100000.000000 100000.000000 1.000000e+05 100000.000000 100000.000000 100000.000000 100000.000000 100000.000000
mean 1.237665e+18 177.629117 24.135305 21.980468 20.531387 19.645762 19.084854 18.668810 4481.366060 301.0 3.511610 186.130520 5.783882e+18 0.576661 5137.009660 55588.647500 449.312740 0.621490
std 8.438560e+12 96.502241 19.644665 31.769291 31.750292 1.854760 1.757895 31.728152 1964.764593 0.0 1.586912 149.011073 3.324016e+18 0.730707 2952.303351 1808.484233 272.498404 0.816778
min 1.237646e+18 0.005528 -18.785328 -9999.000000 -9999.000000 9.822070 9.469903 -9999.000000 109.000000 301.0 1.000000 11.000000 2.995191e+17 -0.009971 266.000000 51608.000000 1.000000 0.000000
25% 1.237659e+18 127.518222 5.146771 20.352353 18.965230 18.135828 17.732285 17.460677 3187.000000 301.0 2.000000 82.000000 2.844138e+18 0.054517 2526.000000 54234.000000 221.000000 0.000000
50% 1.237663e+18 180.900700 23.645922 22.179135 21.099835 20.125290 19.405145 19.004595 4188.000000 301.0 4.000000 146.000000 5.614883e+18 0.424173 4987.000000 55868.500000 433.000000 0.000000
75% 1.237668e+18 233.895005 39.901550 23.687440 22.123767 21.044785 20.396495 19.921120 5326.000000 301.0 5.000000 241.000000 8.332144e+18 0.704154 7400.250000 56777.000000 645.000000 1.000000
max 1.237681e+18 359.999810 83.000519 32.781390 31.602240 29.571860 32.141470 29.383740 8162.000000 301.0 6.000000 989.000000 1.412694e+19 7.011245 12547.000000 58932.000000 1000.000000 2.000000
In [12]:
print(df.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100000 entries, 0 to 99999
Data columns (total 18 columns):
 #   Column       Non-Null Count   Dtype  
---  ------       --------------   -----  
 0   obj_ID       100000 non-null  float64
 1   alpha        100000 non-null  float64
 2   delta        100000 non-null  float64
 3   u            100000 non-null  float64
 4   g            100000 non-null  float64
 5   r            100000 non-null  float64
 6   i            100000 non-null  float64
 7   z            100000 non-null  float64
 8   run_ID       100000 non-null  int64  
 9   rerun_ID     100000 non-null  int64  
 10  cam_col      100000 non-null  int64  
 11  field_ID     100000 non-null  int64  
 12  spec_obj_ID  100000 non-null  float64
 13  redshift     100000 non-null  float64
 14  plate        100000 non-null  int64  
 15  MJD          100000 non-null  int64  
 16  fiber_ID     100000 non-null  int64  
 17  target       100000 non-null  int64  
dtypes: float64(10), int64(8)
memory usage: 13.7 MB
None
In [13]:
# missing value
df.isnull().sum()
Out[13]:
obj_ID         0
alpha          0
delta          0
u              0
g              0
r              0
i              0
z              0
run_ID         0
rerun_ID       0
cam_col        0
field_ID       0
spec_obj_ID    0
redshift       0
plate          0
MJD            0
fiber_ID       0
target         0
dtype: int64

There are no missing values and all the features are numerical. I have plotted and histogram and boxplot for each feature below.

In [13]:
for var in df:
    # print(var)
    plt.subplot(1,2,1)
    plt.hist(df[var])
    plt.title(var)
    plt.subplot(1,2,2)
    plt.boxplot(df[var])
    plt.show()
In [150]:
sns.pairplot(df, diag_kind = "kde")
plt.show()
In [151]:
plt.figure(figsize = (14,10))
sns.heatmap(df.corr(), annot = True, fmt = ".1f", linewidths = .7)
plt.show()

Let's also drop name ID of objects, those are obj_ID and run_ID. Also drop field_ID,cam_col, fiber_ID those seem to be varaible related to the telescope.

In [14]:
df = df.drop(['obj_ID','run_ID','cam_col','rerun_ID','field_ID','fiber_ID'], axis = 1)
df_o = deepcopy(df)
df.head()
Out[14]:
alpha delta u g r i z spec_obj_ID redshift plate MJD target
0 135.689107 32.494632 23.87882 22.27530 20.39501 19.16573 18.79371 6.543777e+18 0.634794 5812 56354 0
1 144.826101 31.274185 24.77759 22.83188 22.58444 21.16812 21.61427 1.176014e+19 0.779136 10445 58158 0
2 142.188790 35.582444 25.26307 22.66389 20.60976 19.34857 18.94827 5.152200e+18 0.644195 4576 55592 0
3 338.741038 -0.402828 22.13682 23.77656 21.61162 20.50454 19.25010 1.030107e+19 0.932346 9149 58039 0
4 345.282593 21.183866 19.43718 17.58028 16.49747 15.97711 15.54461 6.891865e+18 0.116123 6121 56187 0

Let's now standarize the dataset for PCA.

In [15]:
y= df.pop('target')
X=deepcopy(df)
In [16]:
var_used = X.columns
In [17]:
scaler = StandardScaler()
X = pd.DataFrame(scaler.fit_transform(X),columns=var_used)
In [18]:
len(var_used)
Out[18]:
11
In [19]:
pca = PCA()
principalComponents = pca.fit_transform(X)
principalDf =  pd.DataFrame(data = principalComponents
            , columns = [f'principal component {i}' for i in range(len(var_used))])
finalDf = pd.concat([principalDf, pd.Series(y,name='target')], axis = 1)
In [20]:
finalDf
Out[20]:
principal component 0 principal component 1 principal component 2 principal component 3 principal component 4 principal component 5 principal component 6 principal component 7 principal component 8 principal component 9 principal component 10 target
0 -0.619063 0.004784 -0.030292 -0.490020 0.311107 0.128349 -0.273645 -0.139095 0.000424 -0.005937 -0.000019 0
1 -3.447277 0.263711 -0.045702 -0.670421 -0.235934 0.356622 0.134340 -0.369042 -0.022528 0.043494 -0.000006 0
2 -0.167216 -0.094075 -0.025058 -0.169046 0.686824 0.481135 -0.262042 -0.156467 0.026829 -0.000202 -0.000011 0
3 -2.635670 0.246728 -0.218497 1.325810 -1.692375 -0.431588 -0.058032 -0.169952 -0.040199 -0.053606 0.000013 0
4 1.163919 0.066820 -1.727918 -0.089419 -1.793918 -1.491959 -0.053333 -0.036234 0.002802 -0.003445 0.000024 0
... ... ... ... ... ... ... ... ... ... ... ... ...
99995 -2.552195 0.207723 1.814352 -1.095554 -1.265123 0.676150 0.213590 0.009976 -0.036584 -0.020457 -0.000004 0
99996 -1.343849 0.100889 1.009397 -1.384394 -0.114236 0.084724 0.029627 -0.033546 -0.006458 -0.022742 0.000022 0
99997 1.788011 -0.147933 -0.108664 0.333864 -0.511920 -0.089935 -0.148893 0.084917 0.000517 -0.000332 -0.000020 0
99998 -0.743860 0.018652 -1.183513 -0.526289 0.099825 0.156455 0.023414 -0.131684 0.066201 0.010183 -0.000001 0
99999 -1.522969 0.141949 -1.105216 -0.593502 0.250581 0.395040 -0.040802 0.050818 0.000368 -0.012890 0.000020 0

100000 rows × 12 columns

In [21]:
pca.explained_variance_ratio_
Out[21]:
array([4.03715327e-01, 2.70745244e-01, 1.06219063e-01, 8.26355171e-02,
       7.37965200e-02, 5.60141481e-02, 3.66101789e-03, 3.09344415e-03,
       9.81219291e-05, 2.15968995e-05, 2.13541902e-11])
In [22]:
explained_var = pca.explained_variance_ratio_
cum_var_exp = np.cumsum(explained_var)
plt.figure(figsize=[10,8])
plt.bar([f'pc {i}' for i in range(len(var_used))],explained_var,label='individual explained variance')
plt.plot([f'pc {i}' for i in range(len(var_used))],cum_var_exp,'or--',label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel("Principal components")
for i,j in zip([f'pc {i}' for i in range(len(var_used))],cum_var_exp):
    plt.text(i,j+0.01,f'{j:.2f}')
plt.legend()
plt.show()

It can be seen that with first five principal component contains 99% of variance so we will use the first five PCA variable for the rest of the lab. The data is frist split into two groups with 80% training data and the 20% remaining data is split again into validation set (16% of the total data) and test set (4% of the total data).

In [23]:
x_train, x_test, y_train, y_test = train_test_split(principalDf[principalDf.columns[:5]], y, test_size = 0.20, random_state = 42)

Training set is 80000 events

In [24]:
print("x_train: {}".format(x_train.shape))
print("x_test: {}".format(x_test.shape))
print("y_train: {}".format(y_train.shape))
print("y_test: {}".format(y_test.shape))
x_train: (80000, 5)
x_test: (20000, 5)
y_train: (80000,)
y_test: (20000,)
In [25]:
x_train1, x_test1, y_train1, y_test1 = train_test_split(x_test, y_test, test_size = 0.20, random_state = 42)

Validation set is 16000 events and test set is 4000 events

In [26]:
print("x_train1: {}".format(x_train1.shape))
print("x_test1: {}".format(x_test1.shape))
print("y_train1: {}".format(y_train1.shape))
print("y_test1: {}".format(y_test1.shape))
x_train1: (16000, 5)
x_test1: (4000, 5)
y_train1: (16000,)
y_test1: (4000,)
In [27]:
sampled_df = pd.concat([ finalDf[finalDf.target==0].sample(20000,random_state=4), finalDf[finalDf.target==1],finalDf[finalDf.target==2]]) 
sampled_df
Out[27]:
principal component 0 principal component 1 principal component 2 principal component 3 principal component 4 principal component 5 principal component 6 principal component 7 principal component 8 principal component 9 principal component 10 target
15127 3.927942 -0.131526 0.098265 -1.293115 -0.774390 -2.027414 -0.067764 -0.071229 -0.011580 -0.002806 -7.262944e-07 0
12826 2.372069 -0.177286 -0.349803 -0.161689 0.160650 -0.210765 -0.001420 0.005521 0.004299 -0.000557 -9.437940e-06 0
76148 -0.495109 0.067254 -0.639750 -1.061104 0.540502 -0.169412 -0.167567 -0.232574 -0.019388 -0.010633 1.527393e-06 0
93367 0.299716 -0.040470 -0.287279 -0.705149 0.362738 0.104917 -0.316504 0.124808 0.011249 -0.012207 -1.846857e-05 0
42249 -0.176417 -0.021137 -0.135034 0.038190 0.440420 0.576967 -0.146799 0.009950 -0.021171 -0.008692 -4.618513e-06 0
... ... ... ... ... ... ... ... ... ... ... ... ...
99931 -1.863019 0.067474 -1.058208 0.740993 -1.377913 1.053874 -0.085301 -0.093110 0.035146 -0.002422 3.521745e-06 2
99939 -0.607371 0.010801 -1.463535 0.628442 -1.329758 0.417808 0.053383 0.154702 0.064175 0.027732 1.207364e-05 2
99941 -2.970424 0.280426 -1.048623 -0.667954 -1.464926 0.529926 0.262944 -0.342760 -0.021050 -0.006772 1.737134e-05 2
99955 0.838066 -0.161592 1.500702 0.898376 -0.828079 0.903323 0.008228 0.268517 -0.006388 0.007206 -1.869723e-05 2
99959 -0.518500 0.004636 0.602540 0.095623 -0.671773 0.963027 0.036862 0.260653 -0.005240 0.005512 8.136588e-06 2

60555 rows × 12 columns

In [28]:
ig, ax = plt.subplots()  
bars=plt.bar(le.classes_, [len(sampled_df[sampled_df.target==i]) for i in range(len(le.classes_)) ])
for index,data in enumerate([len(sampled_df[sampled_df.target==i]) for i in range(len(le.classes_)) ]):
    plt.text(x=index,y=10000,s=f'{data/len(target)*100:.0f}%',c='white',va='center', fontweight='bold')
plt.title("Class")
plt.show()
In [29]:
sampled_y = sampled_df.pop('target')

So we have another dataset with equal number of observations and lets split this dataset into 60% training 20% validataion and 20% testing set.

In [30]:
x_trains, x_tests, y_trains, y_tests = train_test_split(sampled_df[sampled_df.columns[:5]], sampled_y, test_size = 0.60, random_state = 42)
print("x_train: {}".format(x_trains.shape))
print("y_train: {}".format(y_trains.shape))
x_trainsv,x_testsv, y_trainsv, y_testsv = train_test_split(x_tests, y_tests, test_size = 0.50, random_state = 42)
print("x_val : {}".format(x_trainsv.shape))
print("y_val: {}".format(y_trainsv.shape))

print("x_test: {}".format(x_testsv.shape))
print("y_test: {}".format(y_testsv.shape))
x_train: (24222, 5)
y_train: (24222,)
x_val : (18166, 5)
y_val: (18166,)
x_test: (18167, 5)
y_test: (18167,)
In [31]:
%%time
# from last time, our logistic regression algorithm is given by (including everything we previously had):
class BinaryLogisticRegression:
    def __init__(self, eta, iterations=20, C=0.001,do_C=False,do_alpha =False,alpha=0.001,do_C_alpha=False,sample_weight=False):
        self.eta = eta
        self.iters = iterations
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.sample_weight = sample_weight
        # internally we will store the weights as self.w_ to keep with sklearn conventions
        
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'Binary Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained Binary Logistic Regression Object'
        
    # convenience, private:
    @staticmethod
    def _add_bias(X):
        return np.hstack((np.ones((X.shape[0],1)),X)) # add bias term
    
    @staticmethod
    def _sigmoid(theta):
        # increase stability, redefine sigmoid operation
        return expit(theta) #1/(1+np.exp(-theta))
    
    @staticmethod
    def get_sample_weight(y):
        sw = [len(y)/(len(np.unique(y))*i) for i in np.bincount(y) ]
        return np.array([sw[j] for j in y])
        

    # vectorized gradient calculation with regularization using L2 Norm and L1 Norm 
    def _get_gradient(self,X,y):
        ydiff = y-self.predict_proba(X,add_bias=False).ravel() # get y difference
        gradient = np.mean(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through
        
        gradient = gradient.reshape(self.w_.shape)
        if self.do_C:   
            gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_alpha:
            # from  https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models
            gradient[1:] += - self.alpha * np.sign(self.w_[1:])
        if self.do_C_alpha:
            gradient[1:] += - self.alpha * np.sign(self.w_[1:]) - 2 * self.w_[1:] * self.C

        return gradient
    
    # public:
    def predict_proba(self,X,add_bias=True):
        # add bias term if requested
        Xb = self._add_bias(X) if add_bias else X
        return self._sigmoid(Xb @ self.w_) # return the probability y=1
    
    def predict(self,X):
        return (self.predict_proba(X)>0.5) #return the actual prediction
    
    
    def fit(self, X, y):
        Xb = self._add_bias(X) # add bias term
        num_samples, num_features = Xb.shape
        
        self.w_ = np.zeros((num_features,1)) # init weight vector to zeros
        # print(self.get_sample_weight(y))
        
        # for as many as the max iterations
        for _ in range(self.iters):
            gradient = self._get_gradient(Xb,y)
            self.w_ += gradient*self.eta  # multiply by learning rate 
            # self.w_ *= self.get_sample_weight(y) # multiply weight with sample weight
            # add bacause maximizing 

blr = BinaryLogisticRegression(eta=0.1,iterations=50,C=0.01,alpha=0.01,do_C_alpha=True)

blr.fit(x_trains,y_trains)
print(blr)

yhat = blr.predict(x_trainsv)
print('Accuracy of: ',accuracy_score(y_trainsv,yhat))
Binary Logistic Regression Object with coefficients:
[[ 1.63490279]
 [ 0.22560623]
 [ 0.09769785]
 [-0.03628907]
 [-0.18932306]
 [-0.19368032]]
Accuracy of:  0.30986458218650226
CPU times: user 1.86 s, sys: 898 ms, total: 2.76 s
Wall time: 82.1 ms

Line search algrothim:

initate weight vector 
for line in lines:
    get gradient
    save_log = {}
    for eta in range(0,1,iter):
        get log likelihood using the graident, eta and save it in a dictanory 
        save_log[eta_i] = log likelihood
    get the minimum value of log likelihood in the eta range
    use the eta value that gives the minimum log likelihood 
    and use the eta*gradient to update the weights
In [32]:
%%time
# and we can update this to use a line search along the gradient like this:
from scipy.optimize import minimize_scalar
import copy
from numpy import ma # (masked array) this has most numpy functions that work with NaN data.
class LineSearchLogisticRegression(BinaryLogisticRegression):
    
    # define custom line search for problem
    def __init__(self, line_iters=0.0,do_plot=False, **kwds):        
        self.line_iters = line_iters
        self.do_plot = do_plot

        # but keep other keywords
        super().__init__(**kwds) # call parent initializer
    
    # this defines the function with the first input to be optimized
    # therefore eta will be optimized, with all inputs constant
    
    # https://stackoverflow.com/questions/21610198/runtimewarning-divide-by-zero-encountered-in-log by Chiraz BenAbdelkader Jul 1, 2020 
    @staticmethod
    def safe_log(x, eps=1e-10):     
        result = np.where(x > eps, x, eps)     
        np.log(result, out=result, where=result > 0)     
        return result     
    
#     @staticmethod
    def objective_function(self,eta,X,y,w,grad):
        wnew = w - eta*(grad)

        gi = expit(X @ wnew).ravel()
        sw =  self.get_sample_weight(y) # calculate sample weight of the class
        loglike = - y*self.safe_log(gi) - (np.ones(y.shape)-y)*self.safe_log(1-gi) 
#         print(loglike)
        if self.do_C:   
            loglike[1:] +=  self.C*np.sum(wnew[1:]**2 )
        if self.do_alpha:
    
            loglike[1:] +=  self.alpha *np.sum(np.absolute(wnew[1:]))
        if self.do_C_alpha:
            loglike[1:] +=  self.alpha *np.sum(np.absolute(wnew[1:]))+ self.C*np.sum(wnew[1:]**2)
        if self.sample_weight:return np.sum(loglike*sw)
        else: return np.sum(loglike)

    def fit(self, X, y):
        Xb = self._add_bias(X) # add bias term
        num_samples, num_features = Xb.shape
        
        self.w_ = np.zeros((num_features,1)) # init weight vector to zeros
        for l in range(int(self.line_iters)):
            # print('line ',l)
            # temp_weight = np.zeros((num_features,1))
            gradient = -self._get_gradient(Xb,y)
            objective_dict={}
            for eta_i in np.linspace(0,1,self.iters):
                # print('eta i ', eta_i)
                # print('objective grad',self.objective_gradient(self.w_,Xb,y,self.C))
                # print('objective function ',self.objective_function(eta_i,Xb,y,self.w_,gradient,self.C))
                # print('grad ', gradient)
                objective_dict[eta_i] = self.objective_function(eta_i,Xb,y,self.w_,gradient)
            # assert False, print(objective_dict)
            min_eta = min(objective_dict, key=objective_dict.get)
            # print('min_eta',min_eta)
            min_logloss = min(objective_dict.values())
            # print('min_logloss',min_logloss)
            
            self.w_ -= gradient*min_eta
                
            if self.do_plot:
                plt.figure()
                plt.plot(objective_dict.keys(),objective_dict.values())
                plt.title(f'line {l} '+ f'min loss likelihood {min_logloss:.2f}, eta : {min_eta:.5f}')
                # plt.text(0.5, 10,f'min loss likelihood {min_logloss:.2f}, eta : {min_eta:.5f}',horizontalalignment='center',verticalalignment='center')
                plt.xlabel('eta')
                plt.ylabel('log likelihood')
                plt.show()
                
            
CPU times: user 61 µs, sys: 0 ns, total: 61 µs
Wall time: 72.7 µs

Here testing the line search, we see that with each line we minimize the log likelihood

In [262]:
%%time
lslr = LineSearchLogisticRegression(eta=0.01, # initial eta is not used. just checks different eta in the direction of eta
                                    iterations=50, # this is important because it is how many uniformaly distrubited eta values are tested so, eta_range = np.linspace(0,1,iterations) 
                                    line_iters=10, 
                                    C=20,
                                    alpha=10,
#                                     do_plot=True,
                                    do_C=True,
                                    do_plot=True,
                                    sample_weight=False
                                    )
lslr.fit(x_trains,y_trains)
yhat = lslr.predict(x_trainsv)
print(lslr)
print('Accuracy of: ',accuracy_score(y_trainsv,yhat))
Binary Logistic Regression Object with coefficients:
[[ 0.4812899 ]
 [ 0.00741682]
 [ 0.00150457]
 [-0.00134687]
 [-0.00435437]
 [-0.00401995]]
Accuracy of:  0.31410326984476494
CPU times: user 2min 22s, sys: 1min 47s, total: 4min 10s
Wall time: 7.87 s
In [33]:
class LineSearchLogisticRegressionMulit:
    def __init__(self, eta, iterations=20,line_iters=4,C=0.001,alpha=0.001,do_C=False,do_alpha =False,do_C_alpha=False,do_plot=False,sample_weight=False):
        self.eta = eta
        self.iters = iterations
        self.line_iters = line_iters
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.do_plot = do_plot
        self.sample_weight=sample_weight
        # internally we will store the weights as self.w_ to keep with sklearn conventions
    
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'
        
    def fit(self,X,y):
        num_samples, num_features = X.shape
        self.unique_ = np.unique(y) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = [] # will fill this array with binary classifiers
        
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = (y==yval) # create a binary problem
            # train the binary classifier for this class
            blr = LineSearchLogisticRegression(eta=self.eta,
                                                iterations = self.iters,
                                                line_iters=self.line_iters,
                                                C=self.C,
                                                alpha=self.alpha,
                                                do_C=self.do_C,
                                                do_alpha=self.do_alpha,
                                                do_C_alpha=self.do_C_alpha,
                                                do_plot=self.do_plot,
                                                sample_weight = self.sample_weight
                                                )
            blr.fit(X,y_binary)
            # add the trained classifier to the list
            self.classifiers_.append(blr)
            
        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T
        
    def predict_proba(self,X):
        probs = []
        for blr in self.classifiers_:
            probs.append(blr.predict_proba(X)) # get probability for each classifier
        
        return np.hstack(probs) # make into single matrix
    
    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
    
# lr = LineSearchLogisticRegressionMulit(0.1,10)
# print(lr)
In [277]:
%%time
lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                    iterations=20, # this is important because it is how many uniformaly distrubited eta values are tested so, eta_range = np.linspace(0,1,iterations) 
                                    line_iters=10, 
                                    C=20,
                                    alpha=4,
#                                     do_plot=True,
                                    do_C=True,
                                    # do_plot=True,
                                    # sample_weight=True
                                    )
lslr.fit(x_trains,y_trains)
yhat = lslr.predict(x_trainsv)
print(lslr)
print('Accuracy of: ',accuracy_score(y_trainsv,yhat))
MultiClass Logistic Regression Object with coefficients:
[[-0.03494374  0.02211448 -0.00502727 -0.00011531 -0.00519918 -0.00909807]
 [-0.03840747  0.00989873 -0.00026786 -0.00029524 -0.00234671 -0.00314211]
 [-0.02266616  0.02422415 -0.00104604 -0.00139043 -0.00778497 -0.01064835]]
Accuracy of:  0.5801497302653308
CPU times: user 2min 14s, sys: 1min 41s, total: 3min 56s
Wall time: 6.59 s
In [278]:
param = {
    "iterations": 20,
    "line_iters": 2,
    "alpha": 0.01,
    "eta" : 0.1,
    "C": 10,
        "do_C_alpha":True ,
#         'sample_weight': True,
        }
clf = LineSearchLogisticRegressionMulit(**param)
clf.fit(x_trains,y_trains)
yhat = clf.predict(x_trainsv)
acc = accuracy_score(y_trainsv,yhat)

Using the optuna to optimize the hyperparameters C, alpha and no. of line. This is the sampled dataset i.e 20k events were sampled from Galaxy.

In [280]:
%%time
study_name = "LineSearchLogisticRegression"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": 20,
        "line_iters": trial.suggest_int("line_iters", 2, 100, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 40., log=True),
        "eta" : 0.1,
        "C": trial.suggest_float("C", 1e-8, 40, log=True),
#         "do_C_alpha": ,
#         'sample_weight': True,
            }
    clf = LineSearchLogisticRegressionMulit(**param)
    clf.fit(x_trains,y_trains)
    yhat = clf.predict(x_trainsv)
    acc = accuracy_score(y_trainsv,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=40)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-18 11:36:01,847] A new study created in RDB with name: LineSearchLogisticRegression
[I 2022-10-18 11:36:03,283] Trial 0 finished with value: 0.5688098645821865 and parameters: {'line_iters': 2, 'alpha': 1.992074558187814, 'C': 2.101027256721947e-08}. Best is trial 0 with value: 0.5688098645821865.
[I 2022-10-18 11:36:05,280] Trial 1 finished with value: 0.5825167896069581 and parameters: {'line_iters': 3, 'alpha': 0.02024831421136299, 'C': 0.014341269172402027}. Best is trial 1 with value: 0.5825167896069581.
[I 2022-10-18 11:36:09,170] Trial 2 finished with value: 0.6099306396565012 and parameters: {'line_iters': 6, 'alpha': 0.017841151855427825, 'C': 0.17996696977500062}. Best is trial 2 with value: 0.6099306396565012.
[I 2022-10-18 11:36:11,785] Trial 3 finished with value: 0.5940768468567654 and parameters: {'line_iters': 4, 'alpha': 0.03247665676278019, 'C': 1.2121273237595798}. Best is trial 2 with value: 0.6099306396565012.
[I 2022-10-18 11:36:19,391] Trial 4 finished with value: 0.6322250357811295 and parameters: {'line_iters': 12, 'alpha': 4.067193490672849e-08, 'C': 0.7420392738689418}. Best is trial 4 with value: 0.6322250357811295.
[I 2022-10-18 11:37:00,482] Trial 5 finished with value: 0.6526478035891226 and parameters: {'line_iters': 66, 'alpha': 2.3070955979181402e-07, 'C': 5.906612687079346}. Best is trial 5 with value: 0.6526478035891226.
[I 2022-10-18 11:37:17,387] Trial 6 finished with value: 0.6451062424309149 and parameters: {'line_iters': 27, 'alpha': 1.7504788436274744e-07, 'C': 1.5758307964780866e-08}. Best is trial 5 with value: 0.6526478035891226.
[I 2022-10-18 11:37:18,752] Trial 7 finished with value: 0.5688098645821865 and parameters: {'line_iters': 2, 'alpha': 0.00019862009833119984, 'C': 0.014475085478231895}. Best is trial 5 with value: 0.6526478035891226.
[I 2022-10-18 11:37:20,739] Trial 8 finished with value: 0.5825167896069581 and parameters: {'line_iters': 3, 'alpha': 0.008683640657029385, 'C': 7.859527356776314e-07}. Best is trial 5 with value: 0.6526478035891226.
[I 2022-10-18 11:37:37,048] Trial 9 finished with value: 0.6450511945392492 and parameters: {'line_iters': 26, 'alpha': 0.06964152043785395, 'C': 2.5817336932117137e-06}. Best is trial 5 with value: 0.6526478035891226.
[I 2022-10-18 11:38:39,725] Trial 10 finished with value: 0.6555653418474072 and parameters: {'line_iters': 100, 'alpha': 4.940325121692573e-06, 'C': 26.48693164436823}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:39:40,075] Trial 11 finished with value: 0.6550699108224155 and parameters: {'line_iters': 96, 'alpha': 6.461986981588828e-06, 'C': 21.955174495278477}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:40:36,856] Trial 12 finished with value: 0.654739623472421 and parameters: {'line_iters': 90, 'alpha': 2.7668575723091027e-05, 'C': 19.2342658925028}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:41:04,573] Trial 13 finished with value: 0.6487944511725201 and parameters: {'line_iters': 44, 'alpha': 7.515837711285177e-06, 'C': 0.0003932364407272854}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:42:07,409] Trial 14 finished with value: 0.6555653418474072 and parameters: {'line_iters': 100, 'alpha': 3.9292785243678014e-06, 'C': 31.727912731433193}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:42:37,388] Trial 15 finished with value: 0.6488494990641859 and parameters: {'line_iters': 47, 'alpha': 1.4611602575326184e-06, 'C': 0.0003420535091063364}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:42:48,883] Trial 16 finished with value: 0.6406473632059893 and parameters: {'line_iters': 18, 'alpha': 1.0076428705298336e-08, 'C': 0.04249335441808249}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:42:55,316] Trial 17 finished with value: 0.6275459649895408 and parameters: {'line_iters': 10, 'alpha': 0.0003292110878284383, 'C': 31.257890077878525}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:43:25,059] Trial 18 finished with value: 0.6488494990641859 and parameters: {'line_iters': 47, 'alpha': 9.020322547486528e-05, 'C': 3.0552351624114416e-05}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:43:43,428] Trial 19 finished with value: 0.6459870086975669 and parameters: {'line_iters': 29, 'alpha': 33.5175139882656, 'C': 1.2018557668629368}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:44:23,252] Trial 20 finished with value: 0.6519872288891335 and parameters: {'line_iters': 63, 'alpha': 1.0974469114780121e-06, 'C': 0.10961121458553665}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:45:22,752] Trial 21 finished with value: 0.6548497192557525 and parameters: {'line_iters': 94, 'alpha': 3.976284174055894e-06, 'C': 37.723266353162266}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:46:20,016] Trial 22 finished with value: 0.6546845755807552 and parameters: {'line_iters': 91, 'alpha': 1.9388833694447074e-05, 'C': 5.147398642306514}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:47:03,081] Trial 23 finished with value: 0.652757899372454 and parameters: {'line_iters': 68, 'alpha': 0.0017553423663732527, 'C': 3.8145170515053053}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:47:27,058] Trial 24 finished with value: 0.6481338764725311 and parameters: {'line_iters': 38, 'alpha': 4.220060022175129e-07, 'C': 0.0031287710949867526}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:48:09,224] Trial 25 finished with value: 0.6525927556974568 and parameters: {'line_iters': 67, 'alpha': 6.39519172912836e-05, 'C': 0.3486901639240936}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:48:14,383] Trial 26 finished with value: 0.6211053616646482 and parameters: {'line_iters': 8, 'alpha': 0.0012467637103492294, 'C': 8.705285110335753}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:48:25,807] Trial 27 finished with value: 0.6406473632059893 and parameters: {'line_iters': 18, 'alpha': 4.978721190161967e-06, 'C': 1.7658160357116282}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:49:27,100] Trial 28 finished with value: 0.6551249587140813 and parameters: {'line_iters': 98, 'alpha': 3.774622087313509e-08, 'C': 33.628049714545796}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:50:02,928] Trial 29 finished with value: 0.6510514147308158 and parameters: {'line_iters': 57, 'alpha': 2.4207021688406685e-08, 'C': 1.1555856477813242e-07}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:50:24,301] Trial 30 finished with value: 0.6474733017725421 and parameters: {'line_iters': 34, 'alpha': 7.230167672454825e-08, 'C': 4.083927505544969e-05}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:51:26,326] Trial 31 finished with value: 0.6552350544974127 and parameters: {'line_iters': 99, 'alpha': 8.242049956989379e-07, 'C': 10.530384830662383}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:52:27,541] Trial 32 finished with value: 0.654959815039084 and parameters: {'line_iters': 97, 'alpha': 1.0858771230423055e-06, 'C': 8.61990862972625}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:53:15,090] Trial 33 finished with value: 0.6536937135307718 and parameters: {'line_iters': 76, 'alpha': 1.4777214999995712e-07, 'C': 0.07755960662863745}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:53:51,268] Trial 34 finished with value: 0.651216558405813 and parameters: {'line_iters': 58, 'alpha': 1.0628422394413027e-08, 'C': 0.5067587283951321}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:54:03,974] Trial 35 finished with value: 0.6415831773643069 and parameters: {'line_iters': 20, 'alpha': 4.944641461520673e-07, 'C': 2.805204300018831}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:54:51,831] Trial 36 finished with value: 0.6536937135307718 and parameters: {'line_iters': 76, 'alpha': 4.365937553837992e-08, 'C': 0.35128658949010466}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:55:24,570] Trial 37 finished with value: 0.6497853132225035 and parameters: {'line_iters': 52, 'alpha': 1.8831468899595794e-06, 'C': 37.00016479735192}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:55:27,854] Trial 38 finished with value: 0.6031047010899483 and parameters: {'line_iters': 5, 'alpha': 2.1630228698115314e-05, 'C': 2.1180069006930995}. Best is trial 10 with value: 0.6555653418474072.
[I 2022-10-18 11:56:16,966] Trial 39 finished with value: 0.6539139050974347 and parameters: {'line_iters': 78, 'alpha': 1.0995063560441618e-07, 'C': 0.022462744485301115}. Best is trial 10 with value: 0.6555653418474072.
Best trial:
  Value: 0.6555653418474072
  Params: 
    C: 26.48693164436823
    alpha: 4.940325121692573e-06
    line_iters: 100
CPU times: user 6h 53min 14s, sys: 5h 14min 53s, total: 12h 8min 7s
Wall time: 20min 15s

Not great performance we can 65% accuracy

In [125]:
study_name = "LineSearchLogisticRegression"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params

History of the trials

In [126]:
plot_optimization_history(study).show()

3D plot of hyperparameters vs accuracy

In [127]:
plot_contour(study, params=['C','alpha']).show()
In [128]:
plot_contour(study, params=['C','line_iters']).show()
In [129]:
plot_contour(study, params=['alpha','line_iters']).show()
In [130]:
plot_slice(study)
In [131]:
plot_parallel_coordinate(study)
In [299]:
# linear boundaries visualization from sklearn documentation
from matplotlib import pyplot as plt
import copy
%matplotlib inline
plt.style.use('ggplot')

def plot_decision_boundaries(lr,Xin,y,title=''):
    Xb = copy.deepcopy(Xin)
    lr.fit(Xb[:,:2],y) # train only on two features

    h=0.01
    # create a mesh to plot in
    x_min, x_max = Xb[:, 0].min() - 1, Xb[:, 0].max() + 1
    y_min, y_max = Xb[:, 1].min() - 1, Xb[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # get prediction values
    Z = lr.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.5)

    # Plot also the training points
    plt.scatter(Xb[:, 0], Xb[:, 1], c=y, cmap=plt.cm.Paired)
    plt.xlabel('PCA 0')
    plt.ylabel('PCA 1')
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.xticks(())
    plt.yticks(())
    plt.title(title)
    plt.show()
    
# lr = LogisticRegression(0.1,1500) # this is still OUR LR implementation, not sklearn
lslr = LineSearchLogisticRegressionMulit(eta=0.1, # initial eta is not used. just checks different eta in the direction of eta
                                    iterations=20, # this is important because it is how many uniformaly distrubited eta values are tested so, eta_range = np.linspace(0,1,iterations) 
                                    line_iters=100, 
                                    C=26.48693164436823,
                                    alpha=4.940325121692573e-06,
#                                     do_plot=True,
                                    do_C_alpha=True,
                                    # do_plot=True,
                                    # sample_weight=True
                                    )
plot_decision_boundaries(lslr,x_trainsv.to_numpy(),y_trainsv)
In [311]:
study_name="LineSearchLogisticRegression"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params
best_params["do_C_alpha"] = True
clf = LineSearchLogisticRegressionMulit(**best_params,eta=0.1,iterations=20)
clf.fit(x_trains.to_numpy(),y_trains)
yhat  = clf.predict_proba(x_testsv)
ytrainhat = clf.predict_proba(x_trains)
yvalhat = clf.predict_proba(x_trainsv)

plt.figure(figsize=(15,15))
plt.subplot(3,2,1)
plot_sigbkg(0,"GALAXY",y_trains,ytrainhat,y_trainsv,yvalhat,y_testsv,yhat) 
plt.subplot(3,2,2)
plot_roc(0,"GALAXY",y_trains,ytrainhat,y_trainsv,yvalhat,y_testsv,yhat) 
# plt.show()
plt.subplot(323)
plot_sigbkg(1,"QSO",y_trains,ytrainhat,y_trainsv,yvalhat,y_testsv,yhat) 
plt.subplot(324)
plot_roc(1,"QSO",y_trains,ytrainhat,y_trainsv,yvalhat,y_testsv,yhat)   
# plt.show()
plt.subplot(325)
plot_sigbkg(2,"STAR",y_trains,ytrainhat,y_trainsv,yvalhat,y_testsv,yhat) 
plt.subplot(326)
plot_roc(2,"STAR",y_trains,ytrainhat,y_trainsv,yvalhat,y_testsv,yhat) 
plt.show()

Using the optuna to optimize the hyperparameters C, alpha and no. of line. This is without sampling but using sample weight, much better performance, but galaxy is not classified.

In [112]:
%%time
study_name = "LineSearchLogisticRegressionsw"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": 20,
        "line_iters": trial.suggest_int("line_iters", 2, 100, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 40., log=True),
        "eta" : 0.1,
        "C": trial.suggest_float("C", 1e-8, 40, log=True),
#         "do_C_alpha": ,
        'sample_weight': True,
            }
    clf = LineSearchLogisticRegressionMulit(**param)
    clf.fit(x_train,y_train)
    yhat = clf.predict(x_train1)
    acc = accuracy_score(y_train1,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=50)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
---------------------------------------------------------------------------
IntegrityError                            Traceback (most recent call last)
/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
   1802                     self.dialect.do_execute(
-> 1803                         cursor, statement, parameters, context
   1804                     )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py in do_execute(self, cursor, statement, parameters, context)
    731     def do_execute(self, cursor, statement, parameters, context=None):
--> 732         cursor.execute(statement, parameters)
    733 

IntegrityError: UNIQUE constraint failed: studies.study_name

The above exception was the direct cause of the following exception:

IntegrityError                            Traceback (most recent call last)
/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/optuna/storages/_rdb/storage.py in create_new_study(self, study_name)
    224                 study = models.StudyModel(study_name=study_name, directions=[direction])
--> 225                 session.add(study)
    226         except IntegrityError:

/hpc/applications/anaconda/3/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/optuna/storages/_rdb/storage.py in _create_scoped_session(scoped_session, ignore_integrity_error)
     53         yield session
---> 54         session.commit()
     55     except IntegrityError as e:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py in commit(self)
   1430 
-> 1431         self._transaction.commit(_to_root=self.future)
   1432 

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py in commit(self, _to_root)
    828         if self._state is not PREPARED:
--> 829             self._prepare_impl()
    830 

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py in _prepare_impl(self)
    807                     break
--> 808                 self.session.flush()
    809             else:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py in flush(self, objects)
   3362             self._flushing = True
-> 3363             self._flush(objects)
   3364         finally:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py in _flush(self, objects)
   3502             with util.safe_reraise():
-> 3503                 transaction.rollback(_capture_exception=True)
   3504 

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py in __exit__(self, type_, value, traceback)
     71                     exc_value,
---> 72                     with_traceback=exc_tb,
     73                 )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py in raise_(***failed resolving arguments***)
    206         try:
--> 207             raise exception
    208         finally:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py in _flush(self, objects)
   3462             try:
-> 3463                 flush_context.execute()
   3464             finally:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/unitofwork.py in execute(self)
    455             for rec in topological.sort(self.dependencies, postsort_actions):
--> 456                 rec.execute(self)
    457 

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/unitofwork.py in execute(self, uow)
    632             uow.states_for_mapper_hierarchy(self.mapper, False, False),
--> 633             uow,
    634         )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py in save_obj(base_mapper, states, uowtransaction, single)
    248             table,
--> 249             insert,
    250         )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py in _emit_insert_statements(base_mapper, uowtransaction, mapper, table, insert, bookkeeping)
   1223                             params,
-> 1224                             execution_options=execution_options,
   1225                         )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _execute_20(self, statement, parameters, execution_options)
   1613         else:
-> 1614             return meth(self, args_10style, kwargs_10style, execution_options)
   1615 

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/sql/elements.py in _execute_on_connection(self, connection, multiparams, params, execution_options, _force)
    325             return connection._execute_clauseelement(
--> 326                 self, multiparams, params, execution_options
    327             )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _execute_clauseelement(self, elem, multiparams, params, execution_options)
   1490             extracted_params,
-> 1491             cache_hit=cache_hit,
   1492         )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
   1845             self._handle_dbapi_exception(
-> 1846                 e, statement, parameters, cursor, context
   1847             )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _handle_dbapi_exception(self, e, statement, parameters, cursor, context)
   2026                 util.raise_(
-> 2027                     sqlalchemy_exception, with_traceback=exc_info[2], from_=e
   2028                 )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py in raise_(***failed resolving arguments***)
    206         try:
--> 207             raise exception
    208         finally:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
   1802                     self.dialect.do_execute(
-> 1803                         cursor, statement, parameters, context
   1804                     )

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py in do_execute(self, cursor, statement, parameters, context)
    731     def do_execute(self, cursor, statement, parameters, context=None):
--> 732         cursor.execute(statement, parameters)
    733 

IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: studies.study_name
[SQL: INSERT INTO studies (study_name) VALUES (?)]
[parameters: ('LineSearchLogisticRegressionsw',)]
(Background on this error at: https://sqlalche.me/e/14/gkpj)

During handling of the above exception, another exception occurred:

DuplicatedStudyError                      Traceback (most recent call last)
<timed exec> in <module>

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/optuna/study/study.py in create_study(storage, sampler, pruner, study_name, direction, load_if_exists, directions)
   1136     storage = storages.get_storage(storage)
   1137     try:
-> 1138         study_id = storage.create_new_study(study_name)
   1139     except exceptions.DuplicatedStudyError:
   1140         if load_if_exists:

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/optuna/storages/_cached_storage.py in create_new_study(self, study_name)
     77     def create_new_study(self, study_name: Optional[str] = None) -> int:
     78 
---> 79         study_id = self._backend.create_new_study(study_name)
     80         with self._lock:
     81             study = _StudyInfo()

/scratch/users/sbataju/virtualenv/venv/lib/python3.6/site-packages/optuna/storages/_rdb/storage.py in create_new_study(self, study_name)
    229                 "Please specify a different name, or reuse the existing one "
    230                 "by setting `load_if_exists` (for Python API) or "
--> 231                 "`--skip-if-exists` flag (for CLI).".format(study_name)
    232             )
    233 

DuplicatedStudyError: Another study with name 'LineSearchLogisticRegressionsw' already exists. Please specify a different name, or reuse the existing one by setting `load_if_exists` (for Python API) or `--skip-if-exists` flag (for CLI).
In [132]:
study_name = "LineSearchLogisticRegressionsw"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params
print(best_params)
{'C': 0.21094460395008038, 'alpha': 3.781323294700339, 'line_iters': 98}
In [133]:
plot_optimization_history(study).show()
In [134]:
plot_contour(study, params=['C','alpha']).show()
In [135]:
plot_contour(study, params=['C','line_iters']).show()
In [136]:
plot_contour(study, params=['alpha','line_iters']).show()
In [137]:
plot_slice(study)
In [138]:
plot_parallel_coordinate(study)
In [67]:
study_name="LineSearchLogisticRegressionsw"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params
best_params["do_C_alpha"] = True
clf = LineSearchLogisticRegressionMulit(C= 0.21094460395008038, alpha= 3.781323294700339, line_iters= 98,eta=0.1,iterations=20,sample_weight=True)
clf.fit(x_train.to_numpy(),y_train)
yhat  = clf.predict_proba(x_test1)
ytrainhat = clf.predict_proba(x_train)
yvalhat = clf.predict_proba(x_train1)

plt.figure(figsize=(15,15))
plt.subplot(3,2,1)
plot_sigbkg(0,"GALAXY",y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat) 
plt.subplot(3,2,2)
plot_roc(0,"GALAXY",y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat) 
# plt.show()
plt.subplot(323)
plot_sigbkg(1,"QSO",y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat) 
plt.subplot(324)
plot_roc(1,"QSO",y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat)  
# plt.show()
plt.subplot(325)
plot_sigbkg(2,"STAR",y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat) 
plt.subplot(326)
plot_roc(2,"STAR",y_train,ytrainhat,y_train1,yvalhat,y_test1,yhat) 
plt.show()

Here I tried to optimize the hyperparameter by hand.

In [113]:
acc_ = []
for l in range(1,20):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=l, 
                                        C=0.08,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(acc_)
plt.xlabel('number of lines')
plt.ylabel('accuracy')
plt.show()
In [115]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=3, 
                                        C=l*0.1,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.1,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [116]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=3, 
                                        C=l,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10),acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [132]:
acc_ = []
for l in range(10,20):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=3, 
                                        C=l,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(10,20),acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [117]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=3, 
                                        C=l*0.01,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.01,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [121]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=5, 
                                        # C=l*0.01,
                                        alpha=l*0.01,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.01,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [131]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=5, 
                                        # C=l*0.01,
                                        alpha=l*0.001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [134]:
acc__ = {}
for l in range(1,10):
    for c in range(1,10):
        lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=5, 
                                        C=c*0.01,
                                        alpha=l*0.001,
    #                                     do_plot=True,
                                        do_C_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
        lslr.fit(x_train1,y_train1)
        yhat = lslr.predict(x_test1)
        acc__[l,c] = accuracy_score(y_test1,yhat)

# plt.figure()
# plt.plot(np.arange(1,10)*0.001,acc_)
# plt.xlabel('alpha')
# plt.ylabel('accuracy')
# plt.show()
In [164]:
pd.DataFrame(np.reshape(list(acc__.values()),(9,9)), columns=np.arange(1,10)*0.01,index=np.arange(1,10)*0.001)
Out[164]:
0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.001 0.6855 0.6855 0.6870 0.6850 0.6855 0.6835 0.6835 0.6815 0.6810
0.002 0.6845 0.6865 0.6875 0.6850 0.6840 0.6835 0.6840 0.6815 0.6805
0.003 0.6855 0.6860 0.6860 0.6845 0.6835 0.6830 0.6830 0.6795 0.6800
0.004 0.6850 0.6865 0.6860 0.6840 0.6840 0.6835 0.6820 0.6810 0.6800
0.005 0.6850 0.6865 0.6865 0.6835 0.6840 0.6835 0.6815 0.6805 0.6795
0.006 0.6860 0.6860 0.6855 0.6820 0.6835 0.6820 0.6820 0.6805 0.6790
0.007 0.6860 0.6860 0.6845 0.6820 0.6830 0.6810 0.6810 0.6805 0.6790
0.008 0.6850 0.6860 0.6835 0.6820 0.6835 0.6795 0.6810 0.6800 0.6785
0.009 0.6855 0.6860 0.6830 0.6815 0.6830 0.6795 0.6805 0.6795 0.6785
In [166]:
max(acc__.values())
Out[166]:
0.6875
In [133]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=5, 
                                        C=0.08,
                                        alpha=l*0.001,
    #                                     do_plot=True,
                                        do_C_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [129]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticRegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=5, 
                                        # C=l*0.01,
                                        alpha=l*0.01,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.01,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()

Now doing Stochastic decent.

In [300]:
%%time
class StochasticLogisticRegression(BinaryLogisticRegression):
    # stochastic gradient calculation 
    def _get_gradient(self,X,y):
        idx = int(np.random.rand()*len(y)) # grab random instance
        ydiff = y[idx]-self.predict_proba(X[idx],add_bias=False) # get y difference (now scalar)
        gradient = X[idx] * ydiff[:,np.newaxis] # make ydiff a column vector and multiply through

        gradient = gradient.reshape(self.w_.shape)
        # gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_C:   
            gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_alpha:
            # from  https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models
            gradient[1:] += - self.alpha * np.sign(self.w_[1:])
        if self.do_C_alpha:
            gradient[1:] += - self.alpha * np.sign(self.w_[1:]) - 2 * self.w_[1:] * self.C
        return gradient
    
    
slr = StochasticLogisticRegression(eta=0.9, iterations=1200, C=0.001,alpha=0.001,do_C_alpha=True) # take a lot more steps!!

slr.fit(x_train1.to_numpy(),y_train1.to_numpy())

yhat = slr.predict(x_test1)
print(slr)
print('Accuracy of: ',accuracy_score(y_test1,yhat))      
Binary Logistic Regression Object with coefficients:
[[ 1.11019224]
 [ 3.45659645]
 [ 4.58822344]
 [-3.67763189]
 [ 1.15974671]
 [-0.31435481]]
Accuracy of:  0.27825
CPU times: user 107 ms, sys: 16.3 ms, total: 123 ms
Wall time: 94.3 ms
In [301]:
class StochasticLogisticRegressionMulit:
    def __init__(self, eta, iterations=20,C=0.001,alpha=0.001,do_C=False,do_alpha =False,do_C_alpha=False,do_plot=False):
        self.eta = eta
        self.iters = iterations
        # self.line_iters = line_iters
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.do_plot = do_plot
        # internally we will store the weights as self.w_ to keep with sklearn conventions
    
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'
        
    def fit(self,X,y):
        num_samples, num_features = X.shape
        self.unique_ = np.unique(y) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = [] # will fill this array with binary classifiers
        
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = (y==yval) # create a binary problem
            # train the binary classifier for this class
            blr = StochasticLogisticRegression(eta=self.eta,
                                                iterations = self.iters,
                                                # line_iters=self.line_iters,
                                                C=self.C,
                                                alpha=self.alpha,
                                                do_C=self.do_C,
                                                do_alpha=self.do_alpha,
                                                do_C_alpha=self.do_C_alpha,
                                                # do_plot=self.do_plot 
                                                )
            blr.fit(X,y_binary)
            # add the trained classifier to the list
            self.classifiers_.append(blr)
            
        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T
        
    def predict_proba(self,X):
        probs = []
        for blr in self.classifiers_:
            probs.append(blr.predict_proba(X)) # get probability for each classifier
        
        return np.hstack(probs) # make into single matrix
    
    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
    

Optimizing Stochastic decent my hand.

In [39]:
%%time
acc_ = []
for l in np.linspace(0.001,1,50):
    lslr = StochasticLogisticRegressionMulit(eta=l,
                                        iterations=5000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.linspace(0.001,1,50),acc_)
plt.xlabel('eta')
plt.ylabel('accuracy')
plt.title('eta w/ 5000 iterations ')
plt.show()
CPU times: user 2min 15s, sys: 1min 42s, total: 3min 58s
Wall time: 13.9 s
In [288]:
acc_ = []
for l in np.arange(100,5000,200):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=l, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(100,5000,200),acc_)
plt.xlabel('iterations')
plt.ylabel('accuracy')
plt.title('iterations w/ eta 0.1')
plt.show()
In [289]:
acc_ = []
for l in np.arange(100,5000,200):
    lslr = StochasticLogisticRegressionMulit(eta=0.01,
                                        iterations=l, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(100,5000,200),acc_)
plt.xlabel('iterations')
plt.ylabel('accuracy')
plt.title('iterations w/ eta 0.01')
plt.show()
In [290]:
acc_ = []
for l in np.arange(100,5000,200):
    lslr = StochasticLogisticRegressionMulit(eta=0.5,
                                        iterations=l, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(100,5000,200),acc_)
plt.xlabel('iterations')
plt.ylabel('accuracy')
plt.title('iterations w/ eta 0.5')
plt.show()
In [273]:
acc_ = []
for l in range(1,100):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=1000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        alpha=l*0.001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,100)*0.001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [274]:
%%time
acc_ = []
for l in range(1,100):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=1000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        alpha=l*0.0001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,100)*0.0001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
CPU times: user 7.19 s, sys: 69.5 ms, total: 7.26 s
Wall time: 7.33 s
In [275]:
%%time
acc_ = []
for l in range(1,100):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=1000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        alpha=l*0.00001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,100)*0.00001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
CPU times: user 6.86 s, sys: 58.4 ms, total: 6.92 s
Wall time: 7.14 s
In [268]:
%%time
acc_ = []
for l in range(1,100):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=1000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        C=l*0.01,
                                        # alpha=l*0.0001,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,100)*0.01,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
CPU times: user 6.85 s, sys: 39.2 ms, total: 6.89 s
Wall time: 6.92 s
In [269]:
%%time
acc_ = []
for l in range(1,100):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=1000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        C=l*0.001,
                                        # alpha=l*0.0001,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,100)*0.001,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
CPU times: user 7.08 s, sys: 53.4 ms, total: 7.14 s
Wall time: 7.18 s
In [270]:
%%time
acc_ = []
for l in range(1,100):
    lslr = StochasticLogisticRegressionMulit(eta=0.1,
                                        iterations=1000, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        # line_iters=5, 
                                        C=l*0.0001,
                                        # alpha=l*0.0001,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,100)*0.0001,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
CPU times: user 6.99 s, sys: 48.6 ms, total: 7.04 s
Wall time: 7.08 s

I will optmize the Stochastic decent by optuna later and now let's do Stochastic decent using Hessian.

In [32]:
%%time
from numpy.linalg import pinv
class HessianBinaryLogisticRegression(BinaryLogisticRegression):
    # just overwrite gradient function
    def _get_gradient(self,X,y):
        g = self.predict_proba(X,add_bias=False).ravel() # get sigmoid value for all classes
        hessian = X.T @ np.diag(g*(1-g)) @ X - 2 * self.C # calculate the hessian

        ydiff = y-g # get y difference
        gradient = np.sum(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through
        gradient = gradient.reshape(self.w_.shape)
        # gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_C:   
            gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_alpha:
            # from  https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models
            gradient[1:] += - self.alpha * np.sign(self.w_[1:])
        if self.do_C_alpha:
            gradient[1:] += - self.alpha * np.sign(self.w_[1:]) - 2 * self.w_[1:] * self.C
        return pinv(hessian) @ gradient
       
hlr = HessianBinaryLogisticRegression(eta=0.01,
                                      iterations=10,
                                      C=15,
                                      alpha=10,
                                      do_C_alpha=True) # note that we need only a few iterations here

# hlr.fit(X,y)
# yhat = hlr.predict(X)
# print(hlr)
# print('Accuracy of: ',accuracy_score(y,yhat))
CPU times: user 45 µs, sys: 31 µs, total: 76 µs
Wall time: 88.5 µs
In [33]:
class HessianBinaryLogisticRegressionMulit:
    def __init__(self, eta, iterations=20,line_iters=4,C=0.001,alpha=0.001,do_C=False,do_alpha =False,do_C_alpha=False,do_plot=False):
        self.eta = eta
        self.iters = iterations
        self.line_iters = line_iters
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.do_plot = do_plot
        # internally we will store the weights as self.w_ to keep with sklearn conventions
    
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'
        
    def fit(self,X,y):
        num_samples, num_features = X.shape
        self.unique_ = np.unique(y) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = [] # will fill this array with binary classifiers
        
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = (y==yval) # create a binary problem
            # train the binary classifier for this class
            blr = HessianBinaryLogisticRegression(eta=self.eta,
                                                iterations = self.iters,
                                                # line_iters=self.line_iters,
                                                C=self.C,
                                                alpha=self.alpha,
                                                do_C=self.do_C,
                                                do_alpha=self.do_alpha,
                                                do_C_alpha=self.do_C_alpha,
                                                # do_plot=self.do_plot 
                                                )
            blr.fit(X,y_binary)
            # add the trained classifier to the list
            self.classifiers_.append(blr)
            
        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T
        
    def predict_proba(self,X):
        probs = []
        for blr in self.classifiers_:
            probs.append(blr.predict_proba(X)) # get probability for each classifier
        
        return np.hstack(probs) # make into single matrix
    
    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
    

Wanted to see if the Stochastic decent using Hessian is working.

In [34]:
%%time
acc_ = []
for l in np.linspace(0.01,1,20):
    lslr = HessianBinaryLogisticRegressionMulit(eta=l,
                                        iterations=5, 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
                                        # do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.linspace(0.01,1,20),acc_)
plt.xlabel('eta')
plt.ylabel('accuracy')
plt.title('eta w/ 5 iterations ')
plt.show()
CPU times: user 14min 40s, sys: 9min 46s, total: 24min 27s
Wall time: 40.9 s

Let's code up the MSE version of the above algorithms.

In [69]:
%%time
# from last time, our logistic regression algorithm is given by (including everything we previously had):
class BinaryLogisticMSERegression:
    def __init__(self, eta, iterations=20, C=0.001,do_C=False,do_alpha =False,alpha=0.001,do_C_alpha=False,sample_weight=False):
        self.eta = eta
        self.iters = iterations
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.sample_weight = sample_weight
        # internally we will store the weights as self.w_ to keep with sklearn conventions
        
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'Binary Logistic Regression MSE Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained Binary Logistic Regression Object'
        
    # convenience, private:
    @staticmethod
    def _add_bias(X):
        return np.hstack((np.ones((X.shape[0],1)),X)) # add bias term
    
    @staticmethod
    def _sigmoid(theta):
        # increase stability, redefine sigmoid operation
        return expit(theta) #1/(1+np.exp(-theta))
    
    @staticmethod
    def get_sample_weight(y):
        sw = [len(y)/(len(np.unique(y))*i) for i in np.bincount(y) ]
        return np.array([sw[j] for j in y])
        

    # vectorized gradient calculation with regularization using L2 Norm and L1 Norm 
    def _get_gradient(self,X,y):
        ydiff = self.predict_proba(X,add_bias=False).ravel() - y # get y difference
        gradient = 2*np.mean(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through
        
        gradient = gradient.reshape(self.w_.shape)
        if self.do_C:   
            gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_alpha:
            # from  https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models
            gradient[1:] += - self.alpha * np.sign(self.w_[1:])
        if self.do_C_alpha:
            gradient[1:] += - self.alpha * np.sign(self.w_[1:]) - 2 * self.w_[1:] * self.C

        return gradient
    
    # public:
    def predict_proba(self,X,add_bias=True):
        # add bias term if requested
        Xb = self._add_bias(X) if add_bias else X
        return Xb @ self.w_ # return the probability y=1
    
    def predict(self,X):
        return (self.predict_proba(X)>0.5) #return the actual prediction
    
    
    def fit(self, X, y):
        Xb = self._add_bias(X) # add bias term
        num_samples, num_features = Xb.shape
        
        self.w_ = np.zeros((num_features,1)) # init weight vector to zeros
        # print(self.get_sample_weight(y))
        
        # for as many as the max iterations
        for _ in range(self.iters):
            gradient = self._get_gradient(Xb,y)
            self.w_ += gradient*self.eta  # multiply by learning rate 
            # self.w_ *= self.get_sample_weight(y) # multiply weight with sample weight
            # add bacause maximizing 

blr = BinaryLogisticMSERegression(eta=0.1,iterations=50,C=0.01,alpha=0.01,do_C_alpha=True)

blr.fit(x_train,y_train)
print(blr)

yhat = blr.predict(x_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Binary Logistic Regression MSE Object with coefficients:
[[-5.39663213e+07]
 [-7.13840751e+11]
 [-8.68096531e+10]
 [-9.36785025e+07]
 [-6.91435331e+08]
 [ 3.93790940e+08]]
Accuracy of:  0.4487
CPU times: user 2.19 s, sys: 1.19 s, total: 3.37 s
Wall time: 96.3 ms
In [70]:
%%time
# and we can update this to use a line search along the gradient like this:
from scipy.optimize import minimize_scalar
import copy
from numpy import ma # (masked array) this has most numpy functions that work with NaN data.
class LineSearchLogisticMSERegression(BinaryLogisticMSERegression):
    
    # define custom line search for problem
    def __init__(self, line_iters=0.0,do_plot=False, **kwds):        
        self.line_iters = line_iters
        self.do_plot = do_plot

        # but keep other keywords
        super().__init__(**kwds) # call parent initializer
    
    # this defines the function with the first input to be optimized
    # therefore eta will be optimized, with all inputs constant
    
    # https://stackoverflow.com/questions/21610198/runtimewarning-divide-by-zero-encountered-in-log by Chiraz BenAbdelkader Jul 1, 2020 
    @staticmethod
    def safe_log(x, eps=1e-10):     
        result = np.where(x > eps, x, eps)     
        np.log(result, out=result, where=result > 0)     
        return result        
    
    def objective_function(self,eta,X,y,w,grad,C):
        wnew = w - eta*(grad)
        gi = X @ wnew
        sw = self.get_sample_weight(y) # calculate sample weight of the class
        ydiff = y-gi.ravel()
        mse = np.square(ydiff)
        # loglike = []
        # for i,_y in enumerate(y):
        #     if _y == 1: 
        #         loglike.append(-np.log(gi[i][0]) + self.C*sum(wnew**2))
        #     else:
        #         loglike.append(-np.log(1-gi[i][0]) + self.C*sum(wnew**2))
        # # the line search is looking for minimization, so take the negative of l(w)

        if self.sample_weight == True: return np.sum(mse*sw)
        else: return np.sum(mse)
         # multiply sample weight and cost function 
        # return -np.sum(y*self.safe_log(g)+(1-y)*self.safe_log(1-g))+ C*sum(wnew**2)
    
    def fit(self, X, y):
        Xb = self._add_bias(X) # add bias term
        num_samples, num_features = Xb.shape
        
        self.w_ = np.zeros((num_features,1)) # init weight vector to zeros
        for l in range(int(self.line_iters)):
            # print('line ',l)
            # temp_weight = np.zeros((num_features,1))
            gradient = -self._get_gradient(Xb,y)
            objective_dict={}
            for eta_i in np.linspace(0.00001,1,self.iters):
                # print('eta i ', eta_i)
                # print('objective grad',self.objective_gradient(self.w_,Xb,y,self.C))
                # print('objective function ',self.objective_function(eta_i,Xb,y,self.w_,gradient,self.C))
                # print('grad ', gradient)
                objective_dict[eta_i] = self.objective_function(eta_i,Xb,y,self.w_,gradient,self.C)
            # assert False, print(objective_dict)
            min_eta = min(objective_dict, key=objective_dict.get)
            # print('min_eta',min_eta)
            min_logloss = min(objective_dict.values())
            # print('min_logloss',min_logloss)
            self.w_ -= gradient*min_eta
            if self.do_plot:
                plt.figure()
                plt.plot(objective_dict.keys(),objective_dict.values())
                plt.title(f'line {l} '+ f'min loss likelihood {min_logloss:.2f}, eta : {min_eta:.5f}')
                # plt.text(0.5, 10,f'min loss likelihood {min_logloss:.2f}, eta : {min_eta:.5f}',horizontalalignment='center',verticalalignment='center')
                plt.xlabel('eta')
                plt.ylabel('log likelihood')
                plt.show()
                
            
CPU times: user 32 µs, sys: 25 µs, total: 57 µs
Wall time: 68.4 µs
In [82]:
class LineSearchLogisticMSERegressionMulit:
    def __init__(self, eta, iterations=20,line_iters=4,C=0.001,alpha=0.001,do_C=False,do_alpha =False,do_C_alpha=False,do_plot=False,sample_weight=False):
        self.eta = eta
        self.iters = iterations
        self.line_iters = line_iters
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.do_plot = do_plot
        self.sample_weight = sample_weight
        # internally we will store the weights as self.w_ to keep with sklearn conventions
    
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'
        
    def fit(self,X,y):
        num_samples, num_features = X.shape
        self.unique_ = np.unique(y) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = [] # will fill this array with binary classifiers
        
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = (y==yval) # create a binary problem
            # train the binary classifier for this class
            blr = LineSearchLogisticMSERegression(eta=self.eta,
                                                iterations = self.iters,
                                                line_iters=self.line_iters,
                                                C=self.C,
                                                alpha=self.alpha,
                                                do_C=self.do_C,
                                                do_alpha=self.do_alpha,
                                                do_C_alpha=self.do_C_alpha,
                                                do_plot=self.do_plot )
            blr.fit(X,y_binary)
            # add the trained classifier to the list
            self.classifiers_.append(blr)
            
        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T
        
    def predict_proba(self,X):
        probs = []
        for blr in self.classifiers_:
            probs.append(blr.predict_proba(X)) # get probability for each classifier
        
        return np.hstack(probs) # make into single matrix
    
    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
    
# lr = LineSearchLogisticRegressionMulit(0.1,10)
# print(lr)

Hyperparameter optmizing for MSE Line Search.

In [84]:
%%time
study_name = "LineSearchLogisticMSERegressionsw"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": 20,
        "line_iters": trial.suggest_int("line_iters", 2, 100, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 40., log=True),
        "eta" : 0.1,
        "C": trial.suggest_float("C", 1e-8, 40, log=True),
        "do_C_alpha":True ,
        'sample_weight': True,
            }
    clf = LineSearchLogisticMSERegressionMulit(**param)
    clf.fit(x_train,y_train)
    yhat = clf.predict(x_train1)
    acc = accuracy_score(y_train1,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=50)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-18 20:57:14,823] A new study created in RDB with name: LineSearchLogisticMSERegressionsw
[I 2022-10-18 20:58:06,986] Trial 0 finished with value: 0.3460625 and parameters: {'line_iters': 41, 'alpha': 2.090167074571667, 'C': 0.0001934912058531394}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 20:58:17,252] Trial 1 finished with value: 0.0825 and parameters: {'line_iters': 8, 'alpha': 0.02284979424832042, 'C': 0.0021163454802259414}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 20:59:33,007] Trial 2 finished with value: 0.0819375 and parameters: {'line_iters': 60, 'alpha': 2.2873204314761476e-05, 'C': 7.936286368184512e-07}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:00:09,841] Trial 3 finished with value: 0.0819375 and parameters: {'line_iters': 29, 'alpha': 7.1265192137879215e-06, 'C': 0.08750732267782378}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:00:14,047] Trial 4 finished with value: 0.0818125 and parameters: {'line_iters': 3, 'alpha': 0.00010367648960462196, 'C': 0.27638333448167557}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:00:16,698] Trial 5 finished with value: 0.0818125 and parameters: {'line_iters': 2, 'alpha': 2.7327610751366555e-06, 'C': 3.1227476681929834e-07}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:00:21,887] Trial 6 finished with value: 0.081875 and parameters: {'line_iters': 4, 'alpha': 0.0002089813511621173, 'C': 0.44235487948318997}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:00:39,707] Trial 7 finished with value: 0.0819375 and parameters: {'line_iters': 14, 'alpha': 0.0015759067851025155, 'C': 0.3379912247024309}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:00:44,886] Trial 8 finished with value: 0.0861875 and parameters: {'line_iters': 4, 'alpha': 0.10995782783864855, 'C': 0.09244789231155948}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:01:17,799] Trial 9 finished with value: 0.0881875 and parameters: {'line_iters': 26, 'alpha': 0.12236777483994904, 'C': 5.5127260241046934e-08}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:03:14,123] Trial 10 finished with value: 0.307875 and parameters: {'line_iters': 92, 'alpha': 27.762920028986592, 'C': 2.4865747473624165e-05}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:04:43,034] Trial 11 finished with value: 0.26475 and parameters: {'line_iters': 70, 'alpha': 33.967313008042474, 'C': 3.9155583904726064e-05}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:05:36,224] Trial 12 finished with value: 0.3178125 and parameters: {'line_iters': 42, 'alpha': 28.763507924662818, 'C': 7.689220112843824e-05}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:06:23,151] Trial 13 finished with value: 0.0819375 and parameters: {'line_iters': 37, 'alpha': 1.4893206992973593e-08, 'C': 0.0012324678713312308}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:06:40,916] Trial 14 finished with value: 0.32575 and parameters: {'line_iters': 14, 'alpha': 0.9533841973701049, 'C': 16.274241733836696}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:06:59,060] Trial 15 finished with value: 0.33725 and parameters: {'line_iters': 14, 'alpha': 1.1444376205081555, 'C': 0.01796820925188883}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:07:08,038] Trial 16 finished with value: 0.3404375 and parameters: {'line_iters': 7, 'alpha': 1.8137558089064183, 'C': 0.007595726077135587}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:07:18,247] Trial 17 finished with value: 0.0824375 and parameters: {'line_iters': 8, 'alpha': 0.01683076665188641, 'C': 1.8551872376707211e-06}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:07:28,498] Trial 18 finished with value: 0.3459375 and parameters: {'line_iters': 8, 'alpha': 1.7713194209903143, 'C': 0.0054562332979943805}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:07:51,371] Trial 19 finished with value: 0.0818125 and parameters: {'line_iters': 18, 'alpha': 3.3411023466890564e-07, 'C': 0.00020316822369768032}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:07:59,225] Trial 20 finished with value: 0.0819375 and parameters: {'line_iters': 6, 'alpha': 0.002176305209411556, 'C': 4.983970803746371}. Best is trial 0 with value: 0.3460625.
[I 2022-10-18 21:08:10,806] Trial 21 finished with value: 0.34625 and parameters: {'line_iters': 9, 'alpha': 2.053681176083378, 'C': 0.006725284988192043}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:08:37,609] Trial 22 finished with value: 0.1733125 and parameters: {'line_iters': 21, 'alpha': 2.569418965143051, 'C': 0.0004066689488375359}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:08:51,702] Trial 23 finished with value: 0.0890625 and parameters: {'line_iters': 11, 'alpha': 0.15290351476655578, 'C': 0.013638350841416444}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:09:04,507] Trial 24 finished with value: 0.21675 and parameters: {'line_iters': 10, 'alpha': 4.0355512867094845, 'C': 7.35466085782412e-06}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:09:11,029] Trial 25 finished with value: 0.082125 and parameters: {'line_iters': 5, 'alpha': 0.011273021818451055, 'C': 0.001833802674914154}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:09:13,703] Trial 26 finished with value: 0.09825 and parameters: {'line_iters': 2, 'alpha': 0.35787466766033815, 'C': 0.00024208641044507482}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:09:36,601] Trial 27 finished with value: 0.213875 and parameters: {'line_iters': 18, 'alpha': 7.2002400604048775, 'C': 1.7029692218753603}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:09:40,545] Trial 28 finished with value: 0.083 and parameters: {'line_iters': 3, 'alpha': 0.04291418775965811, 'C': 0.005584731795460303}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:09:52,058] Trial 29 finished with value: 0.093625 and parameters: {'line_iters': 9, 'alpha': 0.4376076299458209, 'C': 0.03272731919831139}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:10:48,062] Trial 30 finished with value: 0.082125 and parameters: {'line_iters': 44, 'alpha': 0.005629021158349068, 'C': 7.383553746932853e-06}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:10:55,796] Trial 31 finished with value: 0.3374375 and parameters: {'line_iters': 6, 'alpha': 6.664854269352857, 'C': 0.0038836286217724713}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:11:03,518] Trial 32 finished with value: 0.346 and parameters: {'line_iters': 6, 'alpha': 1.7554170571270569, 'C': 0.0022072503023338524}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:11:13,759] Trial 33 finished with value: 0.3384375 and parameters: {'line_iters': 8, 'alpha': 10.699284142483256, 'C': 0.0009967185940744018}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:11:20,218] Trial 34 finished with value: 0.09325 and parameters: {'line_iters': 5, 'alpha': 0.3830693178443298, 'C': 0.0720219733987934}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:11:24,165] Trial 35 finished with value: 0.0834375 and parameters: {'line_iters': 3, 'alpha': 0.06539508156143121, 'C': 0.000693562857640936}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:11:38,194] Trial 36 finished with value: 0.083875 and parameters: {'line_iters': 11, 'alpha': 0.4780670106764139, 'C': 0.00010878269111206886}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:12:12,505] Trial 37 finished with value: 0.0819375 and parameters: {'line_iters': 27, 'alpha': 0.00032912751224882297, 'C': 0.0017264175836971091}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:12:17,716] Trial 38 finished with value: 0.0818125 and parameters: {'line_iters': 4, 'alpha': 8.113573763239158e-05, 'C': 0.05277221176067544}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:13:34,908] Trial 39 finished with value: 0.1289375 and parameters: {'line_iters': 61, 'alpha': 9.149752538794196, 'C': 0.15208039313643995}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:13:42,647] Trial 40 finished with value: 0.08275 and parameters: {'line_iters': 6, 'alpha': 0.030284996912341277, 'C': 0.492036408567616}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:13:51,747] Trial 41 finished with value: 0.3450625 and parameters: {'line_iters': 7, 'alpha': 1.6655305258193338, 'C': 0.00769693799180124}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:14:00,749] Trial 42 finished with value: 0.3404375 and parameters: {'line_iters': 7, 'alpha': 1.8324170173933114, 'C': 0.0044369618383117366}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:14:07,302] Trial 43 finished with value: 0.08875 and parameters: {'line_iters': 5, 'alpha': 0.19272844206846626, 'C': 0.01972455341136782}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:14:22,649] Trial 44 finished with value: 0.3315625 and parameters: {'line_iters': 12, 'alpha': 1.0208509562341068, 'C': 0.0004481706773280341}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:14:34,598] Trial 45 finished with value: 0.081875 and parameters: {'line_iters': 9, 'alpha': 25.172918998601354, 'C': 4.0829176055907135e-05}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:14:38,522] Trial 46 finished with value: 0.0818125 and parameters: {'line_iters': 3, 'alpha': 5.9492211081149445e-06, 'C': 0.010133572268847925}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:14:43,716] Trial 47 finished with value: 0.3363125 and parameters: {'line_iters': 4, 'alpha': 4.1749709727411055, 'C': 1.3946798470990044e-05}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:16:50,387] Trial 48 finished with value: 0.3005625 and parameters: {'line_iters': 100, 'alpha': 14.556699973211146, 'C': 1.6867176724977218e-08}. Best is trial 21 with value: 0.34625.
[I 2022-10-18 21:17:08,219] Trial 49 finished with value: 0.313625 and parameters: {'line_iters': 14, 'alpha': 0.8445245962086085, 'C': 0.002756421404997907}. Best is trial 21 with value: 0.34625.
Best trial:
  Value: 0.34625
  Params: 
    C: 0.006725284988192043
    alpha: 2.053681176083378
    line_iters: 9
CPU times: user 6h 42min 40s, sys: 5h 12min 16s, total: 11h 54min 56s
Wall time: 19min 53s

Not sure why this is not performing well. I will not plot the test vs train but the accuracy is so terrible.

I was optiminzing MSE Linesearch by hand, it do seem to perform much better than the loglikelihood.

In [159]:
%%time
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=20, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        # C=l*0.01,
                                        alpha=l*0.001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
CPU times: user 26min 16s, sys: 20min 59s, total: 47min 15s
Wall time: 1min 18s
In [199]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        # C=l*0.01,
                                        alpha=l*0.0001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.0001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [205]:
%%time
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        # C=l*0.01,
                                        alpha=l*0.00001,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.00001,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
CPU times: user 2min 16s, sys: 1.9 s, total: 2min 18s
Wall time: 23.6 s
In [201]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=20, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        # C=l*0.01,
                                        alpha=l*0.1,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.1,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [198]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=20, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        # C=l*0.01,
                                        alpha=l*1,
    #                                     do_plot=True,
                                        do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*1,acc_)
plt.xlabel('alpha')
plt.ylabel('accuracy')
plt.show()
In [197]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=20, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        C=l*0.01,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.01,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [202]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=20, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        C=l*0.001,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.001,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [203]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=20, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        C=l*0.0001,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.0001,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [195]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        C=l*0.1,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*0.1,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()
In [194]:
acc_ = []
for l in range(1,10):
    lslr = LineSearchLogisticMSERegressionMulit(eta=0.1,
                                        iterations=10, # this is important because it is how many uniformaly distrubited eta bins are tested so, eta_range = np.linspace(0,1,iterations) 
                                        line_iters=50, 
                                        C=l*1,
                                        # alpha=4,
    #                                     do_plot=True,
                                        do_C=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1,y_train1)
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.arange(1,10)*1,acc_)
plt.xlabel('C')
plt.ylabel('accuracy')
plt.show()

MSE version of Stochastic decent

In [116]:
%%time
class StochasticLogisticMSERegression(BinaryLogisticMSERegression):
    # stochastic gradient calculation 
    def _get_gradient(self,X,y):
        idx = int(np.random.rand()*len(y)) # grab random instance
        ydiff = y[idx]-self.predict_proba(X[idx],add_bias=False).ravel() # get y difference (now scalar)
        gradient = 2*X[idx] * ydiff[:,np.newaxis] # make ydiff a column vector and multiply through

        gradient = gradient.reshape(self.w_.shape)
        # gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_C:   
            gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_alpha:
            # from  https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models
            gradient[1:] += - self.alpha * np.sign(self.w_[1:])
        if self.do_C_alpha:
            gradient[1:] += - self.alpha * np.sign(self.w_[1:]) - 2 * self.w_[1:] * self.C
        return gradient
    
    
slr = StochasticLogisticMSERegression(eta=0.01, iterations=1200, C=0.001,alpha=0.001,do_C_alpha=True) # take a lot more steps!!
 
slr.fit(X,y)

yhat = slr.predict(X)
# print(slr)
print('Accuracy of: ',accuracy_score(y,yhat)) 
Accuracy of:  0.28043
CPU times: user 195 ms, sys: 49.3 ms, total: 244 ms
Wall time: 69.1 ms
In [117]:
class StochasticLogisticMSERegressionMulit:
    def __init__(self, eta, iterations=20,line_iters=4,C=0.001,alpha=0.001,do_C=False,do_alpha =False,do_C_alpha=False,do_plot=False):
        self.eta = eta
        self.iters = iterations
        self.line_iters = line_iters
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.do_plot = do_plot
        # internally we will store the weights as self.w_ to keep with sklearn conventions
    
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'
        
    def fit(self,X,y):
        num_samples, num_features = X.shape
        self.unique_ = np.unique(y) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = [] # will fill this array with binary classifiers
        
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = (y==yval) # create a binary problem
            # train the binary classifier for this class
            blr = StochasticLogisticMSERegression(eta=self.eta,
                                                iterations = self.iters,
                                                # line_iters=self.line_iters,
                                                C=self.C,
                                                alpha=self.alpha,
                                                do_C=self.do_C,
                                                do_alpha=self.do_alpha,
                                                do_C_alpha=self.do_C_alpha)
                                                # do_plot=self.do_plot )
            blr.fit(X,y_binary)
            # add the trained classifier to the list
            self.classifiers_.append(blr)
            
        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T
        
    def predict_proba(self,X):
        probs = []
        for blr in self.classifiers_:
            probs.append(blr.predict_proba(X)) # get probability for each classifier
        
        return np.hstack(probs) # make into single matrix
    
    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
    
# lr = LineSearchLogisticRegressionMulit(0.1,10)
# print(lr)
In [48]:
%%time
acc_ = []
for l in np.linspace(0.001,0.2,40):
    lslr = StochasticLogisticMSERegressionMulit(eta=l,
                                        iterations=5000, 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.linspace(0.001,1,40),acc_)
plt.xlabel('eta')
plt.ylabel('accuracy')
plt.title('eta w/ 5000 iterations ')
plt.show()
CPU times: user 1min 48s, sys: 1min 21s, total: 3min 9s
Wall time: 11.6 s
In [346]:
%%time
acc_ = []
for l in np.linspace(0.001,0.1,40):
    lslr = StochasticLogisticMSERegressionMulit(eta=l,
                                        iterations=5000, 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = lslr.predict(x_test1)
    acc_.append(accuracy_score(y_test1,yhat))

plt.figure()
plt.plot(np.linspace(0.001,0.1,40),acc_)
plt.xlabel('eta')
plt.ylabel('accuracy')
plt.title('eta w/ 5000 iterations ')
plt.show()
CPU times: user 10.7 s, sys: 73.1 ms, total: 10.8 s
Wall time: 10.9 s
In [44]:
%%time
from numpy.linalg import pinv
class HessianBinaryLogisticMSERegression(BinaryLogisticMSERegression):
    # just overwrite gradient function
    def _get_gradient(self,X,y):
        g = self.predict_proba(X,add_bias=False).ravel() # get sigmoid value for all classes
        if not self.do_C and not self.do_alpha and not self.do_C_alpha:
            hessian = X.T @ X  # calculate the hessian
        if self.do_C:   
            hessian = X.T @ X - 2 * self.C # calculate the hessian
        if self.do_alpha:
            hessian = X.T @ X -  self.alpha/np.linalg.norm(self.w_[1:])
        if self.do_C_alpha:
            hessian = X.T @ X -  2 * self.C  -  self.alpha/np.linalg.norm(self.w_[1:])

        ydiff = y-g # get y difference
        gradient = 2 * np.sum(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through
        gradient = gradient.reshape(self.w_.shape)
        # gradient[1:] += -2 * self.w_[1:] * self.C
        if self.do_C:   
            gradient[1:] -= 2 * self.w_[1:] * self.C
        if self.do_alpha:
            gradient[1:] -=  self.alpha * np.sign(self.w_[1:])
        if self.do_C_alpha:
            gradient[1:] -=  self.alpha * np.sign(self.w_[1:]) + 2 * self.w_[1:] * self.C
        return pinv(hessian) @ gradient
       
hlr = HessianBinaryLogisticMSERegression(eta=0.1,
                                      iterations=20,
                                    #   C=15,
                                    #   alpha=10,
                                    #   do_C_alpha=True
                                      ) # note that we need only a few iterations here

hlr.fit(X,y)
yhat = hlr.predict(X)
print(hlr)
print('Accuracy of: ',accuracy_score(y,yhat))
Binary Logistic Regression MSE Object with coefficients:
[[ 6.14324708e-01]
 [-1.16176697e-02]
 [ 1.53036981e-02]
 [-5.77885843e-01]
 [-9.28423187e+00]
 [ 2.38823158e-02]
 [ 4.42127058e-02]
 [ 9.85213166e+00]
 [-6.21624874e+02]
 [-1.65238505e-01]
 [ 6.21311852e+02]
 [ 3.43721174e-01]]
Accuracy of:  0.53135
CPU times: user 2.63 s, sys: 1.69 s, total: 4.31 s
Wall time: 125 ms
In [45]:
class HessianBinaryLogisticMSERegressionMulit:
    def __init__(self, eta, iterations=20,line_iters=4,C=0.001,alpha=0.001,do_C=False,do_alpha =False,do_C_alpha=False,do_plot=False):
        self.eta = eta
        self.iters = iterations
        self.line_iters = line_iters
        self.C = C
        self.do_C = do_C
        self.alpha = alpha
        self.do_alpha = do_alpha
        self.do_C_alpha = do_C_alpha
        self.do_plot = do_plot
        # internally we will store the weights as self.w_ to keep with sklearn conventions
    
    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'
        
    def fit(self,X,y):
        num_samples, num_features = X.shape
        self.unique_ = np.unique(y) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = [] # will fill this array with binary classifiers
        
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = (y==yval) # create a binary problem
            # train the binary classifier for this class
            blr = HessianBinaryLogisticMSERegression(eta=self.eta,
                                                iterations = self.iters,
                                                # line_iters=self.line_iters,
                                                C=self.C,
                                                alpha=self.alpha,
                                                do_C=self.do_C,
                                                do_alpha=self.do_alpha,
                                                do_C_alpha=self.do_C_alpha)
                                                # do_plot=self.do_plot )
            blr.fit(X,y_binary)
            # add the trained classifier to the list
            self.classifiers_.append(blr)
            
        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T
        
    def predict_proba(self,X):
        probs = []
        for blr in self.classifiers_:
            probs.append(blr.predict_proba(X)) # get probability for each classifier
        
        return np.hstack(probs) # make into single matrix
    
    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
    
# lr = LineSearchLogisticRegressionMulit(0.1,10)
# print(lr)
In [51]:
%%time
acc_ = []
for l in np.linspace(0.001,1,50):
    lslr = StochasticLogisticMSERegressionMulit(eta=l,
                                        iterations=5, 
                                        # line_iters=5, 
                                        # C=l*0.01,
                                        # alpha=l*0.001,
    #                                     do_plot=True,
                                        # do_alpha=True,
                                        # do_plot=True,
                                        # sample_weight=True
                                        )
    lslr.fit(x_train.to_numpy(),y_train.to_numpy())
    yhat = lslr.predict(x_test)
    acc_.append(accuracy_score(y_test,yhat))

plt.figure()
plt.plot(np.linspace(0.001,1,50),acc_)
plt.xlabel('eta')
plt.ylabel('accuracy')
plt.title('eta w/ 30 iterations ')
plt.show()
CPU times: user 7.94 s, sys: 7.48 s, total: 15.4 s
Wall time: 524 ms

Now let's see the Stochastic Decent with log likelihood.

In [142]:
%%time
study_name = "StochasticLogisticRegressionMulit"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": trial.suggest_int("iterations", 50, 5000, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 10., log=True),
        "eta" : trial.suggest_float("eta", 1e-8, 1.0, log=True),
        "C": trial.suggest_float("C", 1e-8, 10, log=True),
        "do_C_alpha": True
            }
    clf = StochasticLogisticRegressionMulit(**param)
    clf.fit(x_train.to_numpy(),y_train.to_numpy())
    yhat = clf.predict(x_train1)
    acc = accuracy_score(y_train1,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=400)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-17 06:38:37,838] A new study created in RDB with name: StochasticLogisticRegressionMulit
[I 2022-10-17 06:38:38,019] Trial 0 finished with value: 0.51625 and parameters: {'iterations': 111, 'alpha': 0.02863979714045214, 'eta': 3.899369568115109e-08, 'C': 1.809627474289929e-05}. Best is trial 0 with value: 0.51625.
[I 2022-10-17 06:38:38,198] Trial 1 finished with value: 0.5915 and parameters: {'iterations': 439, 'alpha': 1.1716617873699436e-08, 'eta': 0.8720443229447021, 'C': 3.9491283153088927}. Best is trial 1 with value: 0.5915.
[I 2022-10-17 06:38:38,360] Trial 2 finished with value: 0.654625 and parameters: {'iterations': 335, 'alpha': 0.02916707624263515, 'eta': 0.008234500024589558, 'C': 9.348180879182654e-06}. Best is trial 2 with value: 0.654625.
[I 2022-10-17 06:38:38,522] Trial 3 finished with value: 0.609875 and parameters: {'iterations': 342, 'alpha': 1.1268396593757854e-07, 'eta': 6.7368461486106715e-06, 'C': 4.4603500132658606e-07}. Best is trial 2 with value: 0.654625.
[I 2022-10-17 06:38:38,779] Trial 4 finished with value: 0.67825 and parameters: {'iterations': 2057, 'alpha': 0.0014522175533630264, 'eta': 0.03237302600471411, 'C': 1.2814882193401929e-05}. Best is trial 4 with value: 0.67825.
[I 2022-10-17 06:38:38,946] Trial 5 finished with value: 0.5915 and parameters: {'iterations': 412, 'alpha': 9.073693656352456, 'eta': 1.6136530058272125e-08, 'C': 0.5315229615397077}. Best is trial 4 with value: 0.67825.
[I 2022-10-17 06:38:39,125] Trial 6 finished with value: 0.60475 and parameters: {'iterations': 556, 'alpha': 0.10296419258123793, 'eta': 4.229265243571795e-06, 'C': 7.51220745783786e-06}. Best is trial 4 with value: 0.67825.
[I 2022-10-17 06:38:39,296] Trial 7 finished with value: 0.56275 and parameters: {'iterations': 424, 'alpha': 4.56573726687944e-08, 'eta': 2.0600872063511077e-05, 'C': 0.0026520210601201546}. Best is trial 4 with value: 0.67825.
[I 2022-10-17 06:38:39,559] Trial 8 finished with value: 0.5915 and parameters: {'iterations': 2159, 'alpha': 0.24962341588104542, 'eta': 0.0594906400241892, 'C': 0.011743504074354457}. Best is trial 4 with value: 0.67825.
[I 2022-10-17 06:38:39,747] Trial 9 finished with value: 0.5915 and parameters: {'iterations': 782, 'alpha': 6.074781056034103e-05, 'eta': 0.6286299469117201, 'C': 3.466996824236259}. Best is trial 4 with value: 0.67825.
[I 2022-10-17 06:38:40,120] Trial 10 finished with value: 0.68975 and parameters: {'iterations': 3900, 'alpha': 3.4735692257633275e-05, 'eta': 0.0012815714255430411, 'C': 2.3281031245508382e-08}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:40,453] Trial 11 finished with value: 0.67925 and parameters: {'iterations': 3224, 'alpha': 5.4069406252980475e-05, 'eta': 0.001204888726459852, 'C': 1.3564326790385163e-08}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:40,879] Trial 12 finished with value: 0.66475 and parameters: {'iterations': 4861, 'alpha': 7.405582079960764e-06, 'eta': 0.0007646037670209228, 'C': 2.098584179025371e-08}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:41,295] Trial 13 finished with value: 0.65125 and parameters: {'iterations': 4678, 'alpha': 1.7323636269642616e-06, 'eta': 0.0005313214678404339, 'C': 1.083583093453023e-08}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:41,548] Trial 14 finished with value: 0.609125 and parameters: {'iterations': 1747, 'alpha': 0.0005633498872900043, 'eta': 0.00044357714663093256, 'C': 2.601689880794903e-07}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:41,773] Trial 15 finished with value: 0.649125 and parameters: {'iterations': 1137, 'alpha': 1.0756061746388367e-05, 'eta': 0.003670875501857405, 'C': 2.705334341865096e-07}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:41,936] Trial 16 finished with value: 0.524125 and parameters: {'iterations': 67, 'alpha': 7.248263612541788e-07, 'eta': 3.92523666669134e-07, 'C': 0.00029111879767856766}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:42,287] Trial 17 finished with value: 0.5925 and parameters: {'iterations': 3194, 'alpha': 0.00010277418030784144, 'eta': 6.538414010739886e-05, 'C': 8.291045092866851e-08}. Best is trial 10 with value: 0.68975.
[I 2022-10-17 06:38:42,516] Trial 18 finished with value: 0.69 and parameters: {'iterations': 1167, 'alpha': 0.00446288674145717, 'eta': 0.004827099201717337, 'C': 1.0391730804603327e-06}. Best is trial 18 with value: 0.69.
[I 2022-10-17 06:38:42,739] Trial 19 finished with value: 0.7005 and parameters: {'iterations': 1152, 'alpha': 0.0024877704186498885, 'eta': 0.06433274631669088, 'C': 2.024872235995638e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:42,908] Trial 20 finished with value: 0.662125 and parameters: {'iterations': 192, 'alpha': 0.0017056829065492613, 'eta': 0.10321350139124026, 'C': 0.00022290129963097377}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:43,130] Trial 21 finished with value: 0.68775 and parameters: {'iterations': 1055, 'alpha': 0.007368476308437518, 'eta': 0.01257842692695096, 'C': 1.0532274371657333e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:43,364] Trial 22 finished with value: 0.5915 and parameters: {'iterations': 1348, 'alpha': 1.0638526922951883, 'eta': 0.15459441578562325, 'C': 1.6032113167467332e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:43,564] Trial 23 finished with value: 0.628625 and parameters: {'iterations': 699, 'alpha': 0.005627987702604963, 'eta': 0.003231351378975321, 'C': 7.0313998928582e-08}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:43,879] Trial 24 finished with value: 0.599 and parameters: {'iterations': 2761, 'alpha': 0.0001763567466017648, 'eta': 0.000158905437623272, 'C': 5.901280559586771e-05}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:44,112] Trial 25 finished with value: 0.694625 and parameters: {'iterations': 1345, 'alpha': 1.2866317306592432e-05, 'eta': 0.011591882262466598, 'C': 2.085875606009695e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:44,321] Trial 26 finished with value: 0.684 and parameters: {'iterations': 795, 'alpha': 0.0007850191870204544, 'eta': 0.015057573359615268, 'C': 2.2664663525873065e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:44,570] Trial 27 finished with value: 0.67875 and parameters: {'iterations': 1604, 'alpha': 0.006857968043278075, 'eta': 0.2636872714449968, 'C': 0.0026295205028997798}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:44,787] Trial 28 finished with value: 0.676625 and parameters: {'iterations': 1008, 'alpha': 1.7436702570726557e-06, 'eta': 0.03969307660875805, 'C': 4.682570905052306e-05}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:44,965] Trial 29 finished with value: 0.6455 and parameters: {'iterations': 215, 'alpha': 0.05413740281232821, 'eta': 0.004279827709214193, 'C': 3.4854781026926945e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:45,168] Trial 30 finished with value: 0.5915 and parameters: {'iterations': 621, 'alpha': 0.47457994696824446, 'eta': 0.00011892027249905151, 'C': 5.101813962000109e-05}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:45,467] Trial 31 finished with value: 0.668 and parameters: {'iterations': 2517, 'alpha': 1.3126501581304932e-05, 'eta': 0.0012616333797855618, 'C': 5.80270174527204e-08}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:45,712] Trial 32 finished with value: 0.698875 and parameters: {'iterations': 1526, 'alpha': 0.0003187338320477397, 'eta': 0.01464815865601519, 'C': 5.523477729329186e-07}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:45,953] Trial 33 finished with value: 0.655375 and parameters: {'iterations': 1417, 'alpha': 0.009569194673638289, 'eta': 0.4400699671827969, 'C': 7.296953672799257e-07}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:46,164] Trial 34 finished with value: 0.695125 and parameters: {'iterations': 883, 'alpha': 0.00034102040478788993, 'eta': 0.02506787796698285, 'C': 2.1284930129058911e-07}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:46,370] Trial 35 finished with value: 0.686375 and parameters: {'iterations': 841, 'alpha': 0.00028408885496866444, 'eta': 0.025154823746267707, 'C': 1.8167351275883664e-07}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:46,646] Trial 36 finished with value: 0.643 and parameters: {'iterations': 2099, 'alpha': 0.0002734160623430151, 'eta': 0.09066438985422183, 'C': 5.285433880627557e-06}. Best is trial 19 with value: 0.7005.
[I 2022-10-17 06:38:46,823] Trial 37 finished with value: 0.700625 and parameters: {'iterations': 252, 'alpha': 0.0023445190326580544, 'eta': 0.19260304246954918, 'C': 9.847055983507344e-06}. Best is trial 37 with value: 0.700625.
[I 2022-10-17 06:38:47,001] Trial 38 finished with value: 0.544375 and parameters: {'iterations': 277, 'alpha': 0.026351100748165327, 'eta': 0.8396336857820612, 'C': 1.837965070297162e-05}. Best is trial 37 with value: 0.700625.
[I 2022-10-17 06:38:47,169] Trial 39 finished with value: 0.599125 and parameters: {'iterations': 93, 'alpha': 0.0005874993032898634, 'eta': 0.2632833425689531, 'C': 1.1325469937861737e-05}. Best is trial 37 with value: 0.700625.
[I 2022-10-17 06:38:47,337] Trial 40 finished with value: 0.69675 and parameters: {'iterations': 133, 'alpha': 0.0014122936168327997, 'eta': 0.038676935092131215, 'C': 3.1789838800379744e-07}. Best is trial 37 with value: 0.700625.
[I 2022-10-17 06:38:47,509] Trial 41 finished with value: 0.680625 and parameters: {'iterations': 121, 'alpha': 0.002240195313488808, 'eta': 0.05661025183866213, 'C': 3.94433324649823e-07}. Best is trial 37 with value: 0.700625.
[I 2022-10-17 06:38:47,678] Trial 42 finished with value: 0.65625 and parameters: {'iterations': 172, 'alpha': 0.022973069441852473, 'eta': 0.030689260881358923, 'C': 1.069336197849754e-07}. Best is trial 37 with value: 0.700625.
[I 2022-10-17 06:38:47,848] Trial 43 finished with value: 0.71025 and parameters: {'iterations': 144, 'alpha': 0.0014894956638672118, 'eta': 0.16542023327557942, 'C': 4.122397191667315e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:48,021] Trial 44 finished with value: 0.6835 and parameters: {'iterations': 104, 'alpha': 0.0017966470658294766, 'eta': 0.19437401899777626, 'C': 4.122183702834442e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:48,194] Trial 45 finished with value: 0.4945 and parameters: {'iterations': 128, 'alpha': 0.09364731116733623, 'eta': 0.40481459957691834, 'C': 5.425509395915921e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:48,372] Trial 46 finished with value: 0.5935 and parameters: {'iterations': 257, 'alpha': 0.016513036218697034, 'eta': 0.8256901598824, 'C': 2.347402043664122e-05}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:48,556] Trial 47 finished with value: 0.7055 and parameters: {'iterations': 384, 'alpha': 0.003077180115503635, 'eta': 0.11451681067152651, 'C': 3.099169615710129e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:48,749] Trial 48 finished with value: 0.668 and parameters: {'iterations': 501, 'alpha': 0.00010896645075734497, 'eta': 0.1399819670049653, 'C': 3.251989944572085e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:48,930] Trial 49 finished with value: 0.587625 and parameters: {'iterations': 337, 'alpha': 3.482810700726492e-05, 'eta': 4.7982446119697945e-06, 'C': 4.612378996510328e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:49,118] Trial 50 finished with value: 0.54675 and parameters: {'iterations': 428, 'alpha': 0.0022698628566857784, 'eta': 1.1420971623478255e-06, 'C': 0.3945906120329447}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:49,287] Trial 51 finished with value: 0.590375 and parameters: {'iterations': 153, 'alpha': 0.0007625720533830088, 'eta': 0.06991534381062768, 'C': 1.2432580152786836e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:49,452] Trial 52 finished with value: 0.531125 and parameters: {'iterations': 66, 'alpha': 0.002938312732760288, 'eta': 0.00764600628996749, 'C': 1.0677301840079773e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:49,629] Trial 53 finished with value: 0.701375 and parameters: {'iterations': 250, 'alpha': 0.0007308750587332933, 'eta': 0.05252893789485576, 'C': 2.9409464248077125e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:49,812] Trial 54 finished with value: 0.623375 and parameters: {'iterations': 295, 'alpha': 0.012314641283826712, 'eta': 0.36350745708763904, 'C': 1.949501958876658e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:49,990] Trial 55 finished with value: 0.687625 and parameters: {'iterations': 221, 'alpha': 0.0009666629744615678, 'eta': 0.11763131384947022, 'C': 4.098555096598007e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:50,177] Trial 56 finished with value: 0.602625 and parameters: {'iterations': 367, 'alpha': 0.00012944893270085275, 'eta': 1.0295832381488574e-07, 'C': 1.377304292479218e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:50,349] Trial 57 finished with value: 0.5785 and parameters: {'iterations': 85, 'alpha': 1.134729374008475e-08, 'eta': 0.0021574664605241317, 'C': 7.829195467023754e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:50,547] Trial 58 finished with value: 0.689375 and parameters: {'iterations': 548, 'alpha': 0.0031233846757834005, 'eta': 0.016871013384347624, 'C': 0.00141639796252654}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:50,731] Trial 59 finished with value: 0.58325 and parameters: {'iterations': 237, 'alpha': 0.04892696600032414, 'eta': 1.6989405472108366e-05, 'C': 2.0411934244534127e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:50,905] Trial 60 finished with value: 0.607625 and parameters: {'iterations': 166, 'alpha': 4.6186980726436444e-05, 'eta': 0.007274079307701677, 'C': 0.00011203856269994003}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:51,076] Trial 61 finished with value: 0.676125 and parameters: {'iterations': 54, 'alpha': 0.0004195953801343902, 'eta': 0.052641610362839, 'C': 4.234303951297097e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:51,270] Trial 62 finished with value: 0.644625 and parameters: {'iterations': 134, 'alpha': 0.0009156203409753012, 'eta': 0.19103068882278085, 'C': 7.273166934498411e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:51,446] Trial 63 finished with value: 0.67 and parameters: {'iterations': 195, 'alpha': 0.0045450800087795395, 'eta': 0.04977900191533771, 'C': 1.3465307997895408e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:51,630] Trial 64 finished with value: 0.709375 and parameters: {'iterations': 314, 'alpha': 0.0014878535514688046, 'eta': 0.09998587479742901, 'C': 3.022557276222044e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:51,819] Trial 65 finished with value: 0.688125 and parameters: {'iterations': 385, 'alpha': 0.00020025799027733959, 'eta': 0.09392489943315349, 'C': 2.647974661247569e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:52,009] Trial 66 finished with value: 0.602875 and parameters: {'iterations': 464, 'alpha': 7.219505471652941e-05, 'eta': 0.492250485306823, 'C': 7.48916652462399e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:52,191] Trial 67 finished with value: 0.686625 and parameters: {'iterations': 288, 'alpha': 3.903297890563397e-08, 'eta': 0.018204164883671974, 'C': 2.8936400895489707e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:52,392] Trial 68 finished with value: 0.662375 and parameters: {'iterations': 641, 'alpha': 0.008913889726702578, 'eta': 0.20131583460388583, 'C': 7.910041109557264e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:52,575] Trial 69 finished with value: 0.656875 and parameters: {'iterations': 317, 'alpha': 0.0012854522499155707, 'eta': 0.09345739337490126, 'C': 2.22999346093771e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:52,758] Trial 70 finished with value: 0.67 and parameters: {'iterations': 249, 'alpha': 0.0004877870246974759, 'eta': 0.009479433013508207, 'C': 0.000602603341287512}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:52,935] Trial 71 finished with value: 0.673125 and parameters: {'iterations': 195, 'alpha': 0.0037865297849244855, 'eta': 0.03770704402396635, 'C': 2.876522354901904e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:53,111] Trial 72 finished with value: 0.635625 and parameters: {'iterations': 147, 'alpha': 0.0012998448574910704, 'eta': 0.03519653553919924, 'C': 1.4505517773176715e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:53,283] Trial 73 finished with value: 0.6715 and parameters: {'iterations': 105, 'alpha': 0.005678473532666575, 'eta': 0.2813395890755859, 'C': 4.858992045176339e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:53,469] Trial 74 finished with value: 0.644375 and parameters: {'iterations': 347, 'alpha': 0.00020094107225176514, 'eta': 0.9646129892126722, 'C': 1.5209912221817756e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:53,740] Trial 75 finished with value: 0.5915 and parameters: {'iterations': 1841, 'alpha': 4.65500904637387, 'eta': 0.00023626942553036507, 'C': 1.5928060437071133e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:54,042] Trial 76 finished with value: 0.6895 and parameters: {'iterations': 2381, 'alpha': 0.0019471306348642862, 'eta': 0.07278642365177082, 'C': 0.01615109380834233}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:54,227] Trial 77 finished with value: 0.679875 and parameters: {'iterations': 211, 'alpha': 0.0004507058269505485, 'eta': 0.016974591697213375, 'C': 4.242643090241776e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:54,406] Trial 78 finished with value: 0.693375 and parameters: {'iterations': 176, 'alpha': 0.01271307483426479, 'eta': 0.14246106578505174, 'C': 9.195817149008401e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:54,629] Trial 79 finished with value: 0.671125 and parameters: {'iterations': 989, 'alpha': 2.0260929286779715e-05, 'eta': 0.005226046703614689, 'C': 2.984903608310587e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:54,803] Trial 80 finished with value: 0.64375 and parameters: {'iterations': 85, 'alpha': 0.0009212748560026718, 'eta': 0.028821901107112743, 'C': 6.348232869461924e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:55,025] Trial 81 finished with value: 0.692375 and parameters: {'iterations': 868, 'alpha': 0.0005274585581908829, 'eta': 0.019952728891069152, 'C': 1.9960071821506923e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:55,282] Trial 82 finished with value: 0.67825 and parameters: {'iterations': 1534, 'alpha': 0.0002383795726087986, 'eta': 0.0023766545809855203, 'C': 6.072864467780797e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:55,522] Trial 83 finished with value: 0.685125 and parameters: {'iterations': 1255, 'alpha': 0.001280218272845484, 'eta': 0.054545140207152544, 'C': 1.0016160255917447e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:55,730] Trial 84 finished with value: 0.626 and parameters: {'iterations': 690, 'alpha': 0.0031037737657175933, 'eta': 0.5179973797316394, 'C': 1.023622908120723e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:56,002] Trial 85 finished with value: 0.656125 and parameters: {'iterations': 1861, 'alpha': 8.217545906938284e-05, 'eta': 0.2778986129110654, 'C': 4.193252990800076e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:56,202] Trial 86 finished with value: 0.61 and parameters: {'iterations': 515, 'alpha': 0.0003514196069634085, 'eta': 0.13822262827263537, 'C': 2.612722961897217e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:56,431] Trial 87 finished with value: 0.54625 and parameters: {'iterations': 1116, 'alpha': 0.00014691638509541026, 'eta': 1.1988457681929756e-08, 'C': 5.103176346436363e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:56,651] Trial 88 finished with value: 0.690125 and parameters: {'iterations': 925, 'alpha': 0.005208671589793777, 'eta': 0.026360266926695068, 'C': 2.2305596799993474e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:56,850] Trial 89 finished with value: 0.6885 and parameters: {'iterations': 432, 'alpha': 0.04288266485872644, 'eta': 0.067425383219647, 'C': 3.645487293754804e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:57,033] Trial 90 finished with value: 0.654875 and parameters: {'iterations': 311, 'alpha': 0.01930659805868684, 'eta': 0.010909159790393341, 'C': 1.2162184257555904e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:57,289] Trial 91 finished with value: 0.690375 and parameters: {'iterations': 1538, 'alpha': 4.434619493289824e-06, 'eta': 0.006538405073463895, 'C': 2.347596058863151e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:57,526] Trial 92 finished with value: 0.69225 and parameters: {'iterations': 1266, 'alpha': 4.008115852107697e-07, 'eta': 0.013325792305523746, 'C': 8.530386533161845e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:57,707] Trial 93 finished with value: 0.678375 and parameters: {'iterations': 265, 'alpha': 0.0007281941207244675, 'eta': 0.04464387968429082, 'C': 2.9826634118251844e-05}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:57,897] Trial 94 finished with value: 0.68625 and parameters: {'iterations': 386, 'alpha': 0.0019296571511847934, 'eta': 0.11427531764799591, 'C': 1.3492065239897203e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:58,139] Trial 95 finished with value: 0.612625 and parameters: {'iterations': 1338, 'alpha': 0.007029687401112644, 'eta': 0.20260525845818947, 'C': 1.692804057880186e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:58,370] Trial 96 finished with value: 0.599875 and parameters: {'iterations': 1118, 'alpha': 0.0003152875854297276, 'eta': 6.389137899473685e-05, 'C': 6.516892725638081e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:58,549] Trial 97 finished with value: 0.669125 and parameters: {'iterations': 222, 'alpha': 0.0011474300836984252, 'eta': 0.02492521901977397, 'C': 1.4289775439564014e-05}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:58,749] Trial 98 finished with value: 0.70025 and parameters: {'iterations': 595, 'alpha': 1.977716448695488e-05, 'eta': 0.08771099025418679, 'C': 1.0296821695545695e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:58,958] Trial 99 finished with value: 0.677125 and parameters: {'iterations': 737, 'alpha': 2.253431365386878e-05, 'eta': 0.3164461570669134, 'C': 3.019341053126604e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:59,162] Trial 100 finished with value: 0.700375 and parameters: {'iterations': 594, 'alpha': 0.0022074820538433297, 'eta': 0.7108119053953647, 'C': 8.780008190803176e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:59,366] Trial 101 finished with value: 0.681875 and parameters: {'iterations': 583, 'alpha': 0.0006100899219612981, 'eta': 0.07287920210329163, 'C': 1.0060553008634628e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:59,578] Trial 102 finished with value: 0.617 and parameters: {'iterations': 789, 'alpha': 0.0026920978797430983, 'eta': 0.5996862213280763, 'C': 1.600641741004458e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:59,774] Trial 103 finished with value: 0.50075 and parameters: {'iterations': 470, 'alpha': 0.0019614134617255553, 'eta': 0.1910456452121383, 'C': 2.813457010475196e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:38:59,974] Trial 104 finished with value: 0.693125 and parameters: {'iterations': 612, 'alpha': 0.003982684239556995, 'eta': 0.42241539785975885, 'C': 6.658341835124354e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:00,174] Trial 105 finished with value: 0.63525 and parameters: {'iterations': 530, 'alpha': 0.0015925153740162367, 'eta': 0.09504605900184077, 'C': 4.110801504797588e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:00,349] Trial 106 finished with value: 0.60375 and parameters: {'iterations': 118, 'alpha': 0.011321151237310836, 'eta': 0.6672143347382112, 'C': 2.0897378541529553e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:00,540] Trial 107 finished with value: 0.694375 and parameters: {'iterations': 393, 'alpha': 0.0001188883562299838, 'eta': 0.04120842353011695, 'C': 1.731482137995e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:00,718] Trial 108 finished with value: 0.625875 and parameters: {'iterations': 151, 'alpha': 0.0007354848242171688, 'eta': 0.15442189346597873, 'C': 1.0779710115796342e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:00,925] Trial 109 finished with value: 0.705 and parameters: {'iterations': 668, 'alpha': 0.0003151356769979196, 'eta': 0.2275331163111602, 'C': 3.9932187760225634e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:01,117] Trial 110 finished with value: 0.611375 and parameters: {'iterations': 348, 'alpha': 0.006458116923784284, 'eta': 0.26118209111429, 'C': 4.825310039260842e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:01,327] Trial 111 finished with value: 0.708125 and parameters: {'iterations': 681, 'alpha': 3.1129514917861897e-06, 'eta': 0.10198889590353069, 'C': 8.005337390824519e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:01,534] Trial 112 finished with value: 0.693 and parameters: {'iterations': 655, 'alpha': 7.47527951358333e-06, 'eta': 0.10978551453009006, 'C': 9.86730288349134e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:01,748] Trial 113 finished with value: 0.5915 and parameters: {'iterations': 736, 'alpha': 3.788548076528178e-06, 'eta': 0.07196829214678752, 'C': 5.6289687731968}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:01,947] Trial 114 finished with value: 0.6775 and parameters: {'iterations': 474, 'alpha': 0.002816624888040109, 'eta': 0.3970055421583492, 'C': 2.6934036525133418e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:02,153] Trial 115 finished with value: 0.642375 and parameters: {'iterations': 572, 'alpha': 2.3570886608197487e-07, 'eta': 0.741762668634754, 'C': 3.6223647637912874e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:02,333] Trial 116 finished with value: 0.6255 and parameters: {'iterations': 139, 'alpha': 0.0010596870286810735, 'eta': 0.18341039865880204, 'C': 6.44557316975796e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:02,526] Trial 117 finished with value: 0.64275 and parameters: {'iterations': 422, 'alpha': 3.616639161577246e-05, 'eta': 0.28041352836822936, 'C': 5.815403773013209e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:02,711] Trial 118 finished with value: 0.677375 and parameters: {'iterations': 279, 'alpha': 1.2145188940019394e-06, 'eta': 0.05080318402689568, 'C': 6.271992854025742e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:02,891] Trial 119 finished with value: 0.699875 and parameters: {'iterations': 186, 'alpha': 1.5080675001601041e-06, 'eta': 0.12590494449244655, 'C': 2.1495988735373449e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:03,076] Trial 120 finished with value: 0.688625 and parameters: {'iterations': 239, 'alpha': 3.6508742835184448e-06, 'eta': 0.14233256162390415, 'C': 1.921820137685229e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:03,261] Trial 121 finished with value: 0.6835 and parameters: {'iterations': 193, 'alpha': 1.8485579207630255e-06, 'eta': 0.10974405851825185, 'C': 9.151738534470938e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:03,449] Trial 122 finished with value: 0.686125 and parameters: {'iterations': 320, 'alpha': 8.166269072738718e-07, 'eta': 0.07671672936923078, 'C': 3.1885646203809363e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:03,629] Trial 123 finished with value: 0.641375 and parameters: {'iterations': 171, 'alpha': 5.8901767440556715e-06, 'eta': 0.0346051040314222, 'C': 7.712270452343346e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:03,804] Trial 124 finished with value: 0.678125 and parameters: {'iterations': 126, 'alpha': 4.0151113745812724e-07, 'eta': 0.24028222774984145, 'C': 4.2697601275719374e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:03,982] Trial 125 finished with value: 0.5915 and parameters: {'iterations': 210, 'alpha': 0.0041316437983949334, 'eta': 1.712017420447776e-06, 'C': 1.5808061255123872e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:04,192] Trial 126 finished with value: 0.64425 and parameters: {'iterations': 687, 'alpha': 2.011807478722432e-06, 'eta': 0.4365967032217305, 'C': 0.010642010464884978}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:04,372] Trial 127 finished with value: 0.636 and parameters: {'iterations': 160, 'alpha': 0.0015751173075594805, 'eta': 0.056016535721871694, 'C': 1.3611517407870468e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:04,554] Trial 128 finished with value: 0.673625 and parameters: {'iterations': 291, 'alpha': 8.743320700735077e-08, 'eta': 0.1339995359101044, 'C': 3.008578444125127e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:04,726] Trial 129 finished with value: 0.653125 and parameters: {'iterations': 112, 'alpha': 0.0005127104404670457, 'eta': 0.08724213826756265, 'C': 1.4191509967301992e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:04,944] Trial 130 finished with value: 0.69 and parameters: {'iterations': 945, 'alpha': 0.0002483070805812077, 'eta': 0.02031072779549333, 'C': 4.951156869804348e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:05,159] Trial 131 finished with value: 0.67425 and parameters: {'iterations': 856, 'alpha': 0.0004111833225461947, 'eta': 0.04213777906149912, 'C': 2.0951498535809668e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:05,373] Trial 132 finished with value: 0.685375 and parameters: {'iterations': 840, 'alpha': 1.970758881569847e-05, 'eta': 0.024515582618218554, 'C': 7.229587974249926e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:05,582] Trial 133 finished with value: 0.6345 and parameters: {'iterations': 741, 'alpha': 0.00016995776448183804, 'eta': 0.937481907527567, 'C': 9.673262754560935e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:05,811] Trial 134 finished with value: 0.60475 and parameters: {'iterations': 1042, 'alpha': 0.0007740241835587951, 'eta': 0.16587768988954948, 'C': 2.556738718799372e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:05,987] Trial 135 finished with value: 0.56175 and parameters: {'iterations': 177, 'alpha': 6.302904937829893e-05, 'eta': 2.814863356177322e-08, 'C': 1.3438046275467962e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:06,198] Trial 136 finished with value: 0.632625 and parameters: {'iterations': 626, 'alpha': 0.0024171094945399894, 'eta': 0.37995002536516703, 'C': 3.2099660955576335e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:06,403] Trial 137 finished with value: 0.675625 and parameters: {'iterations': 542, 'alpha': 0.001143262113052305, 'eta': 0.030619181715681412, 'C': 4.5829416145184535e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:06,588] Trial 138 finished with value: 0.69 and parameters: {'iterations': 260, 'alpha': 0.0016697287334557752, 'eta': 0.06332253698895092, 'C': 5.582278201793905e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:06,766] Trial 139 finished with value: 0.658875 and parameters: {'iterations': 142, 'alpha': 2.435170622018222e-06, 'eta': 0.09392272597182019, 'C': 9.339923885130267e-08}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:07,034] Trial 140 finished with value: 0.678125 and parameters: {'iterations': 1709, 'alpha': 1.073531418794876e-05, 'eta': 0.22149049071740481, 'C': 2.441056172040441e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:07,289] Trial 141 finished with value: 0.692625 and parameters: {'iterations': 1478, 'alpha': 1.080646732585178e-06, 'eta': 0.01738948562538925, 'C': 2.290316592992125e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:07,538] Trial 142 finished with value: 0.5915 and parameters: {'iterations': 1395, 'alpha': 0.0035647292531768063, 'eta': 0.010751190073464348, 'C': 1.4135023737630765}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:07,775] Trial 143 finished with value: 0.69825 and parameters: {'iterations': 1220, 'alpha': 0.00036711843373043275, 'eta': 0.039111599351089536, 'C': 3.931652218163724e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:08,014] Trial 144 finished with value: 0.69375 and parameters: {'iterations': 1155, 'alpha': 0.0004488246926976236, 'eta': 0.030809823123049227, 'C': 4.469535471714788e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:08,232] Trial 145 finished with value: 0.604875 and parameters: {'iterations': 885, 'alpha': 0.0002238162924940369, 'eta': 0.0007861116625838432, 'C': 1.1005715898231113e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:08,457] Trial 146 finished with value: 0.706625 and parameters: {'iterations': 1061, 'alpha': 0.0003118270454065744, 'eta': 0.04769902221189346, 'C': 7.081531950031299e-07}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:08,650] Trial 147 finished with value: 0.681875 and parameters: {'iterations': 489, 'alpha': 0.00033043642908801206, 'eta': 0.06872466201727848, 'C': 1.6743496935216609e-06}. Best is trial 43 with value: 0.71025.
[I 2022-10-17 06:39:08,914] Trial 148 finished with value: 0.716125 and parameters: {'iterations': 1718, 'alpha': 0.0007210374693935831, 'eta': 0.1191234527543934, 'C': 5.59064680716863e-07}. Best is trial 148 with value: 0.716125.
[I 2022-10-17 06:39:09,198] Trial 149 finished with value: 0.713625 and parameters: {'iterations': 1986, 'alpha': 0.000728699376351833, 'eta': 0.15276050092301655, 'C': 6.887451932795202e-07}. Best is trial 148 with value: 0.716125.
[I 2022-10-17 06:39:09,496] Trial 150 finished with value: 0.5955 and parameters: {'iterations': 2323, 'alpha': 0.0008077455617309196, 'eta': 0.5947082509102792, 'C': 6.857835623244884e-07}. Best is trial 148 with value: 0.716125.
[I 2022-10-17 06:39:09,777] Trial 151 finished with value: 0.69975 and parameters: {'iterations': 1933, 'alpha': 0.0005950324485731204, 'eta': 0.10129910098783283, 'C': 1.0897695741872649e-06}. Best is trial 148 with value: 0.716125.
[I 2022-10-17 06:39:10,106] Trial 152 finished with value: 0.726125 and parameters: {'iterations': 2780, 'alpha': 9.280563918243176e-05, 'eta': 0.11842893099960039, 'C': 4.6081054861060306e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:10,444] Trial 153 finished with value: 0.7035 and parameters: {'iterations': 3147, 'alpha': 9.567595682999446e-05, 'eta': 0.1373787841398979, 'C': 3.757688937793713e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:10,803] Trial 154 finished with value: 0.7155 and parameters: {'iterations': 3523, 'alpha': 7.638320047715378e-05, 'eta': 0.16764673851944936, 'C': 4.025320486834992e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:11,125] Trial 155 finished with value: 0.661125 and parameters: {'iterations': 2818, 'alpha': 8.258687990345842e-05, 'eta': 0.2834338838321034, 'C': 4.723474465200717e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:11,481] Trial 156 finished with value: 0.696625 and parameters: {'iterations': 3405, 'alpha': 4.305668775251123e-05, 'eta': 0.18199668315103984, 'C': 7.813079457708526e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:11,813] Trial 157 finished with value: 0.617125 and parameters: {'iterations': 2893, 'alpha': 2.5711872021818553e-05, 'eta': 0.1587613298860412, 'C': 3.609489611862614e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:12,232] Trial 158 finished with value: 0.714 and parameters: {'iterations': 4546, 'alpha': 9.617914773426665e-05, 'eta': 0.32505571470256495, 'C': 3.231202052963065e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:12,642] Trial 159 finished with value: 0.7135 and parameters: {'iterations': 4394, 'alpha': 0.00016303416525926455, 'eta': 0.3050248612066312, 'C': 6.026560396140189e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:13,074] Trial 160 finished with value: 0.6425 and parameters: {'iterations': 4693, 'alpha': 0.00010588178622412463, 'eta': 0.28580267442795193, 'C': 1.2473565582692714e-06}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:13,448] Trial 161 finished with value: 0.668625 and parameters: {'iterations': 3649, 'alpha': 0.00011955551344197934, 'eta': 0.4852902272429, 'C': 6.64417574206979e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:13,846] Trial 162 finished with value: 0.676125 and parameters: {'iterations': 4075, 'alpha': 0.00016028522998458606, 'eta': 0.3459817270469861, 'C': 5.806803992356877e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:14,219] Trial 163 finished with value: 0.62025 and parameters: {'iterations': 3647, 'alpha': 0.00016537848179764452, 'eta': 0.21424763754759593, 'C': 3.6858187972320273e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:14,623] Trial 164 finished with value: 0.61175 and parameters: {'iterations': 4143, 'alpha': 7.31137487395049e-05, 'eta': 0.13384129862890612, 'C': 1.8581383120443808e-06}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:15,032] Trial 165 finished with value: 0.613 and parameters: {'iterations': 4290, 'alpha': 0.00027607591170553336, 'eta': 0.5265238743762376, 'C': 8.994609122499982e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:15,382] Trial 166 finished with value: 0.687625 and parameters: {'iterations': 3221, 'alpha': 4.680841375497953e-05, 'eta': 0.20675753322598367, 'C': 3.1229706308885645e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:15,669] Trial 167 finished with value: 0.712875 and parameters: {'iterations': 2083, 'alpha': 0.0010533639436714033, 'eta': 0.301830461716148, 'C': 5.523185174373661e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:15,991] Trial 168 finished with value: 0.704625 and parameters: {'iterations': 2634, 'alpha': 0.0011348988337217278, 'eta': 0.3499097787716338, 'C': 7.743448294910756e-05}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:16,435] Trial 169 finished with value: 0.62725 and parameters: {'iterations': 4986, 'alpha': 0.001041356847243019, 'eta': 0.2929235259807866, 'C': 0.00010992802811324549}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:16,722] Trial 170 finished with value: 0.597125 and parameters: {'iterations': 2137, 'alpha': 0.0006171503839761714, 'eta': 0.3643254386434498, 'C': 0.0005585840169759749}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:17,037] Trial 171 finished with value: 0.622 and parameters: {'iterations': 2578, 'alpha': 0.001543942148616462, 'eta': 0.1395176909179107, 'C': 5.039032953128342e-07}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:17,341] Trial 172 finished with value: 0.705625 and parameters: {'iterations': 2386, 'alpha': 0.0008899707095234345, 'eta': 0.11116117915717681, 'C': 2.322832863181346e-05}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:17,685] Trial 173 finished with value: 0.69475 and parameters: {'iterations': 3073, 'alpha': 0.0002235197881991875, 'eta': 0.20894683636942446, 'C': 1.613885558014544e-05}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:17,982] Trial 174 finished with value: 0.69725 and parameters: {'iterations': 2313, 'alpha': 0.0009393502348456225, 'eta': 0.09606037626100257, 'C': 2.3890599899809807e-05}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:18,296] Trial 175 finished with value: 0.664375 and parameters: {'iterations': 2549, 'alpha': 0.0005729405280717109, 'eta': 0.1328412088884533, 'C': 3.47246912993628e-05}. Best is trial 152 with value: 0.726125.
[I 2022-10-17 06:39:18,664] Trial 176 finished with value: 0.7445 and parameters: {'iterations': 3568, 'alpha': 0.00010732574494827194, 'eta': 0.331433832469533, 'C': 0.00010004557227399993}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:19,042] Trial 177 finished with value: 0.68625 and parameters: {'iterations': 3684, 'alpha': 5.739547489827284e-05, 'eta': 0.3924152806555467, 'C': 0.0001104709334644102}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:19,456] Trial 178 finished with value: 0.709125 and parameters: {'iterations': 4401, 'alpha': 9.732580869578713e-05, 'eta': 0.2559708099827465, 'C': 8.171249052585834e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:19,869] Trial 179 finished with value: 0.628625 and parameters: {'iterations': 4385, 'alpha': 0.00012690822062593211, 'eta': 0.26759867454028347, 'C': 7.903103067982607e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:20,224] Trial 180 finished with value: 0.53075 and parameters: {'iterations': 3401, 'alpha': 8.913360106110737e-05, 'eta': 0.6758631923792735, 'C': 0.00015383709550749868}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:20,556] Trial 181 finished with value: 0.704125 and parameters: {'iterations': 2971, 'alpha': 0.00029235822095443796, 'eta': 0.1675341574404386, 'C': 4.332199817738538e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:20,870] Trial 182 finished with value: 0.691875 and parameters: {'iterations': 2637, 'alpha': 0.00015562237434918005, 'eta': 0.17450707441580707, 'C': 0.0002638779245454259}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:21,198] Trial 183 finished with value: 0.663 and parameters: {'iterations': 2892, 'alpha': 0.00027064197651641765, 'eta': 0.3098757180795024, 'C': 4.548622213483622e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:21,597] Trial 184 finished with value: 0.71575 and parameters: {'iterations': 3908, 'alpha': 9.86630092521581e-05, 'eta': 0.11928101734212677, 'C': 6.55826147201369e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:21,990] Trial 185 finished with value: 0.61325 and parameters: {'iterations': 3969, 'alpha': 0.00039146651398982963, 'eta': 0.44186247947557594, 'C': 0.00015983677091711392}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:22,432] Trial 186 finished with value: 0.6915 and parameters: {'iterations': 4682, 'alpha': 6.237394894744893e-05, 'eta': 0.21687878467952632, 'C': 7.287139018072246e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:22,724] Trial 187 finished with value: 0.628625 and parameters: {'iterations': 2077, 'alpha': 0.0002358717583778537, 'eta': 0.09917444308506869, 'C': 4.843215943431122e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:23,107] Trial 188 finished with value: 0.593875 and parameters: {'iterations': 3681, 'alpha': 0.00016839626622516452, 'eta': 1.833609598401089e-07, 'C': 6.105880565702129e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:23,529] Trial 189 finished with value: 0.697125 and parameters: {'iterations': 4415, 'alpha': 3.4615814831789044e-05, 'eta': 0.07874303215298904, 'C': 0.00018806460932451898}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:23,836] Trial 190 finished with value: 0.698 and parameters: {'iterations': 2409, 'alpha': 0.0005515329396234589, 'eta': 0.16890013325687828, 'C': 3.417846299290415e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:24,186] Trial 191 finished with value: 0.654625 and parameters: {'iterations': 3194, 'alpha': 9.894872300245691e-05, 'eta': 0.1258712457913073, 'C': 0.0005630855743751539}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:24,520] Trial 192 finished with value: 0.653375 and parameters: {'iterations': 2936, 'alpha': 8.638434866634098e-05, 'eta': 0.2399263374693293, 'C': 0.00011083176311599185}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:24,883] Trial 193 finished with value: 0.64425 and parameters: {'iterations': 3418, 'alpha': 0.00012901432835598855, 'eta': 0.11884648927234413, 'C': 8.311936491429304e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:25,167] Trial 194 finished with value: 0.604875 and parameters: {'iterations': 1939, 'alpha': 0.00037475965621552355, 'eta': 0.33611114334564124, 'C': 4.1531561183295755e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:25,553] Trial 195 finished with value: 0.7025 and parameters: {'iterations': 3799, 'alpha': 5.190252960420132e-05, 'eta': 0.06110868831043108, 'C': 2.5299302423304182e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:25,942] Trial 196 finished with value: 0.695875 and parameters: {'iterations': 2286, 'alpha': 0.0011300000006177053, 'eta': 0.1623654418584027, 'C': 3.4003114224967373e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:26,284] Trial 197 finished with value: 0.6955 and parameters: {'iterations': 2721, 'alpha': 0.00019983972811774844, 'eta': 0.5624476356972947, 'C': 0.00040136154185657413}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:26,630] Trial 198 finished with value: 0.658375 and parameters: {'iterations': 3171, 'alpha': 9.559243529896955e-05, 'eta': 0.22619607862989483, 'C': 6.496857033344897e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:27,029] Trial 199 finished with value: 0.7155 and parameters: {'iterations': 4233, 'alpha': 3.5074591608655914e-05, 'eta': 0.0849138233314975, 'C': 2.0364566105539173e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:27,451] Trial 200 finished with value: 0.71675 and parameters: {'iterations': 4610, 'alpha': 3.464952546203181e-05, 'eta': 0.09062146590457525, 'C': 1.7824616724811137e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:27,893] Trial 201 finished with value: 0.688875 and parameters: {'iterations': 4970, 'alpha': 7.375062039322304e-05, 'eta': 0.07975212237762347, 'C': 1.6319247424908282e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:28,304] Trial 202 finished with value: 0.702875 and parameters: {'iterations': 4353, 'alpha': 2.738969283747652e-05, 'eta': 0.059312866628882445, 'C': 2.387083674411238e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:28,566] Trial 203 finished with value: 0.584625 and parameters: {'iterations': 1702, 'alpha': 3.807153882261923e-05, 'eta': 1.507196692813706e-05, 'C': 4.935255100293972e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:28,953] Trial 204 finished with value: 0.71775 and parameters: {'iterations': 3987, 'alpha': 1.4414646849685312e-05, 'eta': 0.0932169555382396, 'C': 6.144996006576982e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:29,338] Trial 205 finished with value: 0.699625 and parameters: {'iterations': 3862, 'alpha': 8.10400546925979e-06, 'eta': 0.10550121829861049, 'C': 7.51354508564229e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:29,754] Trial 206 finished with value: 0.6745 and parameters: {'iterations': 4495, 'alpha': 2.900400083371955e-05, 'eta': 0.09003625952226975, 'C': 1.1572576799855705e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:30,146] Trial 207 finished with value: 0.69 and parameters: {'iterations': 4065, 'alpha': 1.2483879779365291e-05, 'eta': 0.048070682062424715, 'C': 1.6434343430541614e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:30,542] Trial 208 finished with value: 0.635625 and parameters: {'iterations': 4104, 'alpha': 5.2328354756656896e-05, 'eta': 0.28746165029992715, 'C': 5.744200185388235e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:30,988] Trial 209 finished with value: 0.720875 and parameters: {'iterations': 4990, 'alpha': 4.974300408602346e-05, 'eta': 0.128548202929913, 'C': 2.8108255161550376e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:31,414] Trial 210 finished with value: 0.71225 and parameters: {'iterations': 4665, 'alpha': 1.4914090432919155e-05, 'eta': 0.06555545566771384, 'C': 3.319324046332202e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:31,846] Trial 211 finished with value: 0.714 and parameters: {'iterations': 4734, 'alpha': 1.8630442383784345e-05, 'eta': 0.08190678410587747, 'C': 3.148558629933765e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:32,288] Trial 212 finished with value: 0.7085 and parameters: {'iterations': 4856, 'alpha': 1.7120361535504542e-05, 'eta': 0.068190184459041, 'C': 2.469137430815404e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:32,733] Trial 213 finished with value: 0.697375 and parameters: {'iterations': 4991, 'alpha': 1.3344122411221555e-05, 'eta': 0.05346077793371878, 'C': 2.582597465613356e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:33,159] Trial 214 finished with value: 0.70425 and parameters: {'iterations': 4709, 'alpha': 1.734915753961138e-05, 'eta': 0.06792234241008353, 'C': 2.6964549295803126e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:33,573] Trial 215 finished with value: 0.7095 and parameters: {'iterations': 4498, 'alpha': 1.3582576774898424e-05, 'eta': 0.0829131577143491, 'C': 1.5472160501030982e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:33,987] Trial 216 finished with value: 0.723625 and parameters: {'iterations': 4467, 'alpha': 1.6577117761815582e-05, 'eta': 0.07053973545518633, 'C': 1.7704097747349721e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:34,406] Trial 217 finished with value: 0.636875 and parameters: {'iterations': 4486, 'alpha': 1.559963698360526e-05, 'eta': 0.07616255210601204, 'C': 1.6119899533696542e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:34,811] Trial 218 finished with value: 0.693125 and parameters: {'iterations': 4211, 'alpha': 9.2241999560121e-06, 'eta': 0.08891946496141993, 'C': 1.3337286774612933e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:35,234] Trial 219 finished with value: 0.625 and parameters: {'iterations': 4612, 'alpha': 2.3290981794109528e-05, 'eta': 0.121881860410475, 'C': 2.61231059117892e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:35,624] Trial 220 finished with value: 0.7005 and parameters: {'iterations': 3879, 'alpha': 2.083726305697151e-05, 'eta': 0.06011309422871887, 'C': 4.094980791041847e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:36,064] Trial 221 finished with value: 0.7225 and parameters: {'iterations': 4776, 'alpha': 3.089207334058591e-05, 'eta': 0.04286302162451096, 'C': 1.8928253457154913e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:36,496] Trial 222 finished with value: 0.682625 and parameters: {'iterations': 4806, 'alpha': 4.925123040285824e-06, 'eta': 0.040791231413150435, 'C': 1.8647127396165994e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:36,906] Trial 223 finished with value: 0.70475 and parameters: {'iterations': 4316, 'alpha': 3.618923469553061e-05, 'eta': 0.08007267700749619, 'C': 3.5948553109328436e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:37,346] Trial 224 finished with value: 0.717375 and parameters: {'iterations': 4898, 'alpha': 1.3318463299974165e-05, 'eta': 0.14705760992582875, 'C': 1.8689818553405792e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:37,742] Trial 225 finished with value: 0.6005 and parameters: {'iterations': 4067, 'alpha': 1.2534640750249776e-05, 'eta': 0.15006417162485017, 'C': 0.137612271714467}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:38,204] Trial 226 finished with value: 0.619375 and parameters: {'iterations': 4981, 'alpha': 9.017200877480145e-06, 'eta': 0.16608361149254364, 'C': 1.1374392010077657e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:38,572] Trial 227 finished with value: 0.6985 and parameters: {'iterations': 3624, 'alpha': 2.8784956059478653e-05, 'eta': 0.1276841628701522, 'C': 2.024022285965156e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:39,001] Trial 228 finished with value: 0.72225 and parameters: {'iterations': 4505, 'alpha': 1.3587198917511222e-05, 'eta': 0.06941240643039209, 'C': 2.5642927240950256e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:39,426] Trial 229 finished with value: 0.68675 and parameters: {'iterations': 4454, 'alpha': 3.969809311952839e-05, 'eta': 0.20458497679752383, 'C': 1.2999211026726353e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:39,821] Trial 230 finished with value: 0.695625 and parameters: {'iterations': 3967, 'alpha': 6.67841747506401e-06, 'eta': 0.10066696768640104, 'C': 3.721277966847949e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:40,281] Trial 231 finished with value: 0.709875 and parameters: {'iterations': 4983, 'alpha': 2.4247638303653504e-05, 'eta': 0.06622844689326833, 'C': 2.2054991241890774e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:40,695] Trial 232 finished with value: 0.712 and parameters: {'iterations': 4447, 'alpha': 1.7537923250455188e-05, 'eta': 0.04337266192198336, 'C': 1.8146401833168818e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:41,112] Trial 233 finished with value: 0.713375 and parameters: {'iterations': 4528, 'alpha': 1.6359079084381293e-05, 'eta': 0.03740404890788209, 'C': 1.6382003994244144e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:41,532] Trial 234 finished with value: 0.70675 and parameters: {'iterations': 4538, 'alpha': 1.6093316545562823e-05, 'eta': 0.03420972159935617, 'C': 7.694930353616414e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:41,904] Trial 235 finished with value: 0.715625 and parameters: {'iterations': 3490, 'alpha': 1.2830597142697015e-05, 'eta': 0.031051901632536454, 'C': 1.2973543694718108e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:42,281] Trial 236 finished with value: 0.699375 and parameters: {'iterations': 3537, 'alpha': 2.764150271306496e-05, 'eta': 0.03586626894239681, 'C': 2.0070450156241184e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:42,676] Trial 237 finished with value: 0.67325 and parameters: {'iterations': 3961, 'alpha': 9.63827333744225e-06, 'eta': 0.04755947042653785, 'C': 1.0767644947493352e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:43,079] Trial 238 finished with value: 0.70525 and parameters: {'iterations': 4161, 'alpha': 2.4759747741137774e-05, 'eta': 0.024339260955155904, 'C': 2.8040292922174124e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:43,526] Trial 239 finished with value: 0.7025 and parameters: {'iterations': 4957, 'alpha': 5.021337779347072e-05, 'eta': 0.05335292950570602, 'C': 4.4346240965664634e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:43,889] Trial 240 finished with value: 0.713375 and parameters: {'iterations': 3507, 'alpha': 1.884998969013102e-05, 'eta': 0.0401073545350669, 'C': 1.8008259105817005e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:44,268] Trial 241 finished with value: 0.703625 and parameters: {'iterations': 3736, 'alpha': 1.6490125550581856e-05, 'eta': 0.036334565987431365, 'C': 1.9553763314475966e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:44,634] Trial 242 finished with value: 0.692375 and parameters: {'iterations': 3531, 'alpha': 1.1126877300645909e-05, 'eta': 0.05074133633858066, 'C': 8.564698302928345e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:45,049] Trial 243 finished with value: 0.71525 and parameters: {'iterations': 4228, 'alpha': 2.0971150418268264e-05, 'eta': 0.06249430875209056, 'C': 1.386280181547973e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:45,462] Trial 244 finished with value: 0.70625 and parameters: {'iterations': 4150, 'alpha': 1.9485992206725795e-05, 'eta': 0.02603487505213446, 'C': 1.204031912528453e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:45,819] Trial 245 finished with value: 0.65325 and parameters: {'iterations': 3358, 'alpha': 6.112417706163111e-06, 'eta': 0.040804486895650686, 'C': 5.206130444059813e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:46,212] Trial 246 finished with value: 0.7125 and parameters: {'iterations': 3845, 'alpha': 3.8155588595904536e-05, 'eta': 0.0632996375577364, 'C': 1.5244563115648288e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:46,591] Trial 247 finished with value: 0.697125 and parameters: {'iterations': 3823, 'alpha': 3.9390791221491334e-05, 'eta': 0.05729868700555934, 'C': 1.4729822861116046e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:46,996] Trial 248 finished with value: 0.701625 and parameters: {'iterations': 4298, 'alpha': 5.904527245281022e-05, 'eta': 0.027992498505077314, 'C': 0.0018450119707174835}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:47,382] Trial 249 finished with value: 0.676875 and parameters: {'iterations': 3902, 'alpha': 3.0674354690981004e-05, 'eta': 0.07505671783792074, 'C': 2.936766336807053e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:47,818] Trial 250 finished with value: 0.707125 and parameters: {'iterations': 4583, 'alpha': 1.0592969196808695e-05, 'eta': 0.020086211774657373, 'C': 7.382319045316436e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:48,186] Trial 251 finished with value: 0.70825 and parameters: {'iterations': 3555, 'alpha': 1.762694585129669e-05, 'eta': 0.042677167161034285, 'C': 1.6229575851118024e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:48,590] Trial 252 finished with value: 0.720625 and parameters: {'iterations': 4216, 'alpha': 4.3588043914926034e-05, 'eta': 0.10886608261432498, 'C': 3.334968879308616e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:48,984] Trial 253 finished with value: 0.699125 and parameters: {'iterations': 4069, 'alpha': 4.8839365371988874e-05, 'eta': 0.1137006283303534, 'C': 3.3829911328302856e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:49,341] Trial 254 finished with value: 0.72125 and parameters: {'iterations': 3372, 'alpha': 3.4861400895028814e-05, 'eta': 0.08997750041684353, 'C': 5.3994128629799e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:49,716] Trial 255 finished with value: 0.697625 and parameters: {'iterations': 3334, 'alpha': 4.288737750758242e-05, 'eta': 0.11176011379451363, 'C': 5.881351877976677e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:50,091] Trial 256 finished with value: 0.705125 and parameters: {'iterations': 3699, 'alpha': 6.385876599775998e-05, 'eta': 0.09069126376955472, 'C': 4.953540489435401e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:50,446] Trial 257 finished with value: 0.682 and parameters: {'iterations': 3348, 'alpha': 3.2820249786981044e-05, 'eta': 0.15948468559312481, 'C': 0.005906388813695317}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:50,844] Trial 258 finished with value: 0.7135 and parameters: {'iterations': 4012, 'alpha': 6.321777974231789e-05, 'eta': 0.1606012445259074, 'C': 7.770589644905501e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:51,254] Trial 259 finished with value: 0.62825 and parameters: {'iterations': 4175, 'alpha': 7.168112397593426e-05, 'eta': 0.1685479251388551, 'C': 8.966556603176709e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:51,671] Trial 260 finished with value: 0.630125 and parameters: {'iterations': 4302, 'alpha': 2.813367723690813e-05, 'eta': 0.13012961358793787, 'C': 6.523529661074745e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:52,016] Trial 261 finished with value: 0.695625 and parameters: {'iterations': 3124, 'alpha': 6.363651948295954e-05, 'eta': 0.21148369849625623, 'C': 3.834087584193924e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:52,385] Trial 262 finished with value: 0.688375 and parameters: {'iterations': 3551, 'alpha': 7.117655820728682e-06, 'eta': 0.10516263922663599, 'C': 8.380042260258426e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:52,832] Trial 263 finished with value: 0.67525 and parameters: {'iterations': 4966, 'alpha': 2.2723096791813743e-05, 'eta': 0.15970786467007758, 'C': 2.892321548453113e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:53,225] Trial 264 finished with value: 0.71475 and parameters: {'iterations': 3931, 'alpha': 0.00011687483218243118, 'eta': 0.0837855261222425, 'C': 4.922644153694858e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:53,610] Trial 265 finished with value: 0.700625 and parameters: {'iterations': 3826, 'alpha': 0.0001185974689407949, 'eta': 0.08722970392573495, 'C': 2.6256476815659196e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:54,012] Trial 266 finished with value: 0.7015 and parameters: {'iterations': 4125, 'alpha': 6.521508503530636e-05, 'eta': 0.07517161620935185, 'C': 1.0288085293726795e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:54,434] Trial 267 finished with value: 0.704 and parameters: {'iterations': 4544, 'alpha': 4.6203439459986086e-05, 'eta': 0.12621018261620132, 'C': 1.4707891794903537e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:54,830] Trial 268 finished with value: 0.70525 and parameters: {'iterations': 3880, 'alpha': 9.042260179142585e-05, 'eta': 0.08909371023208786, 'C': 4.5305077666452896e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:55,196] Trial 269 finished with value: 0.722875 and parameters: {'iterations': 3364, 'alpha': 2.610547568316645e-05, 'eta': 0.0607263864080516, 'C': 2.325177082045087e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:55,565] Trial 270 finished with value: 0.68575 and parameters: {'iterations': 3303, 'alpha': 3.106948624961568e-05, 'eta': 0.12148782662975822, 'C': 2.3664017789204614e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:55,990] Trial 271 finished with value: 0.697625 and parameters: {'iterations': 4431, 'alpha': 0.00013446779936008579, 'eta': 0.06252145248390051, 'C': 6.95914703602121e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:56,439] Trial 272 finished with value: 0.7275 and parameters: {'iterations': 4996, 'alpha': 6.738138954999659e-05, 'eta': 0.20126880275253053, 'C': 4.0778466926436953e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:56,837] Trial 273 finished with value: 0.678375 and parameters: {'iterations': 4117, 'alpha': 7.467584773522857e-05, 'eta': 0.19862213464714318, 'C': 4.3780473643039114e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:57,281] Trial 274 finished with value: 0.641625 and parameters: {'iterations': 4939, 'alpha': 5.356278735553515e-05, 'eta': 0.22338819539910107, 'C': 8.431726870622569e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:57,725] Trial 275 finished with value: 0.646625 and parameters: {'iterations': 4999, 'alpha': 0.00012770573896044582, 'eta': 0.14694563800509428, 'C': 0.05494079052813538}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:58,062] Trial 276 finished with value: 0.688375 and parameters: {'iterations': 3013, 'alpha': 4.710125853480787e-05, 'eta': 0.09591701120448826, 'C': 3.36582956451013e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:58,442] Trial 277 finished with value: 0.70275 and parameters: {'iterations': 3767, 'alpha': 2.7932211513306418e-05, 'eta': 0.3623379045741027, 'C': 1.2937999500503257e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:58,848] Trial 278 finished with value: 0.654625 and parameters: {'iterations': 4254, 'alpha': 8.480402142046808e-05, 'eta': 0.1591517619700077, 'C': 5.871722480357366e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:59,293] Trial 279 finished with value: 0.595375 and parameters: {'iterations': 4461, 'alpha': 0.0001597522483678277, 'eta': 6.031895400333616e-05, 'C': 2.6452402768795326e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:39:59,668] Trial 280 finished with value: 0.715375 and parameters: {'iterations': 3533, 'alpha': 4.3167495641885094e-05, 'eta': 0.11303875042549216, 'C': 4.0725068290898497e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:00,057] Trial 281 finished with value: 0.674375 and parameters: {'iterations': 3240, 'alpha': 4.711639267711844e-05, 'eta': 0.09005150697267976, 'C': 4.013292202881484e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:00,442] Trial 282 finished with value: 0.70775 and parameters: {'iterations': 3483, 'alpha': 1.1639293816118572e-05, 'eta': 0.06934454079784967, 'C': 2.33910167614183e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:00,834] Trial 283 finished with value: 0.708 and parameters: {'iterations': 3850, 'alpha': 3.5189288785326156e-05, 'eta': 0.11277793384852186, 'C': 3.971096416840897e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:01,172] Trial 284 finished with value: 0.705 and parameters: {'iterations': 2908, 'alpha': 3.1069995901974776e-05, 'eta': 0.13744867894999024, 'C': 1.0859727160625248e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:01,600] Trial 285 finished with value: 0.727 and parameters: {'iterations': 4639, 'alpha': 0.00011615696618782139, 'eta': 0.21576254092701713, 'C': 2.562959726728987e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:02,044] Trial 286 finished with value: 0.62275 and parameters: {'iterations': 4648, 'alpha': 0.21117954550565615, 'eta': 0.22890974984532284, 'C': 2.1053653759174883e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:02,417] Trial 287 finished with value: 0.68475 and parameters: {'iterations': 3567, 'alpha': 2.3840301641065667e-05, 'eta': 0.06414321608961568, 'C': 1.1762094544488545e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:02,815] Trial 288 finished with value: 0.617375 and parameters: {'iterations': 4153, 'alpha': 8.877748877510586e-05, 'eta': 0.0002012669440360515, 'C': 5.738986291583367e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:03,265] Trial 289 finished with value: 0.711125 and parameters: {'iterations': 4977, 'alpha': 8.593830076035577e-06, 'eta': 0.08873882718659971, 'C': 2.4428669979528147e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:03,699] Trial 290 finished with value: 0.715 and parameters: {'iterations': 4583, 'alpha': 4.396120430992367e-05, 'eta': 0.05553837437540497, 'C': 3.3875992886016593e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:04,122] Trial 291 finished with value: 0.691 and parameters: {'iterations': 4454, 'alpha': 4.169404391844828e-05, 'eta': 0.05752617523122418, 'C': 3.1447532686057307e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:04,558] Trial 292 finished with value: 0.703875 and parameters: {'iterations': 4521, 'alpha': 2.1620291978844676e-05, 'eta': 0.052191461873369115, 'C': 0.000997136108083065}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:04,962] Trial 293 finished with value: 0.592625 and parameters: {'iterations': 3794, 'alpha': 6.732674552770352e-05, 'eta': 1.7927646137302098e-06, 'C': 1.5590316588372274e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:05,368] Trial 294 finished with value: 0.703625 and parameters: {'iterations': 4097, 'alpha': 1.2209575606381457e-05, 'eta': 0.08317601323614697, 'C': 1.995262468223779e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:05,828] Trial 295 finished with value: 0.70025 and parameters: {'iterations': 4989, 'alpha': 4.0353275413221334e-05, 'eta': 0.05020881104265479, 'C': 4.2733974302217737e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:06,270] Trial 296 finished with value: 0.70975 and parameters: {'iterations': 4625, 'alpha': 9.775007932948418e-05, 'eta': 0.10448918353764913, 'C': 3.054219599019124e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:06,630] Trial 297 finished with value: 0.71225 and parameters: {'iterations': 3325, 'alpha': 2.2069353591526815e-05, 'eta': 0.21926580727329464, 'C': 9.591806042108748e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:07,036] Trial 298 finished with value: 0.705875 and parameters: {'iterations': 4085, 'alpha': 5.314756468093754e-05, 'eta': 0.029437524448444018, 'C': 1.751334799049127e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:07,419] Trial 299 finished with value: 0.6965 and parameters: {'iterations': 3645, 'alpha': 3.272313849937885e-05, 'eta': 0.07519913561149183, 'C': 4.774059480906581e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:07,853] Trial 300 finished with value: 0.727 and parameters: {'iterations': 4555, 'alpha': 4.701525026723403e-06, 'eta': 0.11986727307932644, 'C': 2.812114059492484e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:08,264] Trial 301 finished with value: 0.718875 and parameters: {'iterations': 4233, 'alpha': 8.480094307388101e-06, 'eta': 0.12183784100715432, 'C': 1.3002129442122404e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:08,613] Trial 302 finished with value: 0.702 and parameters: {'iterations': 3085, 'alpha': 3.157521492809748e-06, 'eta': 0.1196054890547736, 'C': 9.135633021430045e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:09,003] Trial 303 finished with value: 0.62625 and parameters: {'iterations': 3689, 'alpha': 4.726315968078229e-06, 'eta': 0.12610763886635598, 'C': 5.234322684486442e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:09,409] Trial 304 finished with value: 0.692625 and parameters: {'iterations': 4214, 'alpha': 8.495279385803007e-06, 'eta': 0.05394534457193247, 'C': 1.0561072523426987e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:09,808] Trial 305 finished with value: 0.694625 and parameters: {'iterations': 3920, 'alpha': 4.3983850214455816e-06, 'eta': 0.001640130328480502, 'C': 1.4361204725406278e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:10,179] Trial 306 finished with value: 0.673875 and parameters: {'iterations': 3437, 'alpha': 7.770197215725279e-06, 'eta': 0.1797693551934722, 'C': 7.348994465952843e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:10,586] Trial 307 finished with value: 0.5835 and parameters: {'iterations': 4243, 'alpha': 1.2388838289981558e-05, 'eta': 0.11910231312407087, 'C': 2.2050319193454583e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:11,014] Trial 308 finished with value: 0.706125 and parameters: {'iterations': 4569, 'alpha': 5.1365562763411445e-06, 'eta': 0.06953513224680641, 'C': 1.4537862400685436e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:11,466] Trial 309 finished with value: 0.586125 and parameters: {'iterations': 4993, 'alpha': 1.702988403581094, 'eta': 0.16346169852554363, 'C': 2.111264765079119e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:11,854] Trial 310 finished with value: 0.723 and parameters: {'iterations': 3797, 'alpha': 1.3103919332126274e-05, 'eta': 0.10121196969279596, 'C': 4.382696645844559e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:12,201] Trial 311 finished with value: 0.603 and parameters: {'iterations': 3161, 'alpha': 1.3271301354955598e-05, 'eta': 0.25718630040785223, 'C': 1.2542957962298533e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:12,579] Trial 312 finished with value: 0.690125 and parameters: {'iterations': 3628, 'alpha': 6.027897513741132e-06, 'eta': 0.12115833410844314, 'C': 3.090349643992818e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:12,986] Trial 313 finished with value: 0.713 and parameters: {'iterations': 4263, 'alpha': 9.884228292853665e-06, 'eta': 0.03987874465563296, 'C': 2.145286885883274e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:13,362] Trial 314 finished with value: 0.68975 and parameters: {'iterations': 3412, 'alpha': 2.3349203205144856e-05, 'eta': 0.21400824781145777, 'C': 5.582840946021656e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:13,756] Trial 315 finished with value: 0.696125 and parameters: {'iterations': 3946, 'alpha': 1.891514115914586e-05, 'eta': 0.09074659422481823, 'C': 6.85734722292118e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:14,190] Trial 316 finished with value: 0.643 and parameters: {'iterations': 4564, 'alpha': 3.361424568966869e-05, 'eta': 0.00041366930876839375, 'C': 1.0874877458089216e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:14,522] Trial 317 finished with value: 0.67525 and parameters: {'iterations': 2799, 'alpha': 1.2808009845475997e-05, 'eta': 0.050577475070714174, 'C': 3.5761024084845893e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:14,935] Trial 318 finished with value: 0.585375 and parameters: {'iterations': 4267, 'alpha': 4.890914948602885e-05, 'eta': 0.15380669423836085, 'C': 1.3874162734974613e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:15,321] Trial 319 finished with value: 0.7025 and parameters: {'iterations': 3764, 'alpha': 6.855392023197307e-06, 'eta': 0.10456799832976289, 'C': 1.9920506943109922e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:15,772] Trial 320 finished with value: 0.714375 and parameters: {'iterations': 4981, 'alpha': 1.6707490532775658e-05, 'eta': 0.07163459838209076, 'C': 2.710647644490216e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:16,131] Trial 321 finished with value: 0.721375 and parameters: {'iterations': 3314, 'alpha': 2.421800999166441e-06, 'eta': 0.1879314099412319, 'C': 4.462800436280751e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:16,486] Trial 322 finished with value: 0.60025 and parameters: {'iterations': 3126, 'alpha': 3.926075935388118e-06, 'eta': 0.44513262308051005, 'C': 6.487439386660947e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:16,849] Trial 323 finished with value: 0.70425 and parameters: {'iterations': 3366, 'alpha': 8.552967116845114e-06, 'eta': 0.2594962204072676, 'C': 9.608565441935251e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:17,191] Trial 324 finished with value: 0.715625 and parameters: {'iterations': 3010, 'alpha': 3.158425209938389e-06, 'eta': 0.16702142442665557, 'C': 5.934139390183307e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:17,532] Trial 325 finished with value: 0.65825 and parameters: {'iterations': 3026, 'alpha': 2.3111302194126933e-06, 'eta': 0.1949708109096056, 'C': 4.945321401511974e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:17,871] Trial 326 finished with value: 0.70675 and parameters: {'iterations': 2898, 'alpha': 3.0072645413906232e-06, 'eta': 0.2811644422564536, 'C': 6.813585719287357e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:18,191] Trial 327 finished with value: 0.718625 and parameters: {'iterations': 2568, 'alpha': 2.845838351212637e-06, 'eta': 0.16691616439482093, 'C': 1.05588992486596e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:18,532] Trial 328 finished with value: 0.722625 and parameters: {'iterations': 2778, 'alpha': 1.1412797597870017e-06, 'eta': 0.17375487706531742, 'C': 1.3076855075265094e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:18,853] Trial 329 finished with value: 0.614875 and parameters: {'iterations': 2623, 'alpha': 9.441636423167687e-07, 'eta': 0.4112017054030708, 'C': 9.311159991470459e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:19,176] Trial 330 finished with value: 0.667875 and parameters: {'iterations': 2683, 'alpha': 2.179243010298426e-06, 'eta': 0.191869683179405, 'C': 2.04877044927971e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:19,512] Trial 331 finished with value: 0.711625 and parameters: {'iterations': 2684, 'alpha': 1.3968803153097347e-06, 'eta': 0.300406059479387, 'C': 1.2028194278687003e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:19,850] Trial 332 finished with value: 0.7125 and parameters: {'iterations': 2929, 'alpha': 5.31567169579063e-07, 'eta': 0.15998027840841178, 'C': 1.4301211050724807e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:20,157] Trial 333 finished with value: 0.58675 and parameters: {'iterations': 2400, 'alpha': 2.5798146496522005e-06, 'eta': 0.23332314987321298, 'C': 1.1299765514452276e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:20,513] Trial 334 finished with value: 0.70825 and parameters: {'iterations': 3279, 'alpha': 1.5103714443166217e-06, 'eta': 0.14209762007709198, 'C': 7.850626907542793e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:20,857] Trial 335 finished with value: 0.6725 and parameters: {'iterations': 2869, 'alpha': 6.230871376764655e-07, 'eta': 0.3679676261705487, 'C': 2.582016817745537e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:21,210] Trial 336 finished with value: 0.667125 and parameters: {'iterations': 3077, 'alpha': 1.8139313773647467e-06, 'eta': 0.19284926391936158, 'C': 5.427385986957399e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:21,539] Trial 337 finished with value: 0.585 and parameters: {'iterations': 2668, 'alpha': 5.029403503870183e-06, 'eta': 0.49742488711007893, 'C': 5.751546884732144e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:21,938] Trial 338 finished with value: 0.715625 and parameters: {'iterations': 3903, 'alpha': 3.2937743961939057e-06, 'eta': 0.13457507529999957, 'C': 1.817367221294047e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:22,335] Trial 339 finished with value: 0.7115 and parameters: {'iterations': 3748, 'alpha': 3.4226267891639425e-06, 'eta': 0.13083683267310584, 'C': 1.7034516381421302e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:22,703] Trial 340 finished with value: 0.68475 and parameters: {'iterations': 3356, 'alpha': 1.1769666712116014e-06, 'eta': 0.26316738617328644, 'C': 3.0923029319329722e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:23,057] Trial 341 finished with value: 0.689625 and parameters: {'iterations': 3179, 'alpha': 2.7574205641633674e-06, 'eta': 0.15672137397558755, 'C': 1.1120663941348593e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:23,443] Trial 342 finished with value: 0.700375 and parameters: {'iterations': 3832, 'alpha': 3.903210973881378e-06, 'eta': 0.10764623382075411, 'C': 0.000262431992761732}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:23,744] Trial 343 finished with value: 0.706875 and parameters: {'iterations': 2215, 'alpha': 5.206209049813481e-06, 'eta': 0.20388220782930247, 'C': 1.5727896954102507e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:24,191] Trial 344 finished with value: 0.59975 and parameters: {'iterations': 4999, 'alpha': 2.113994122016457e-06, 'eta': 4.370036692018112e-05, 'C': 1.0072626569502896e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:24,512] Trial 345 finished with value: 0.709875 and parameters: {'iterations': 2523, 'alpha': 1.4660621628018704e-06, 'eta': 0.11639992891040503, 'C': 7.348898470350168e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:24,943] Trial 346 finished with value: 0.65675 and parameters: {'iterations': 4543, 'alpha': 8.620035534501666e-07, 'eta': 0.3429165897319199, 'C': 4.7173201611101236e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:25,338] Trial 347 finished with value: 0.596625 and parameters: {'iterations': 3953, 'alpha': 6.625762939245772e-06, 'eta': 1.0793925167423941e-05, 'C': 1.943811470865225e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:25,722] Trial 348 finished with value: 0.68625 and parameters: {'iterations': 3538, 'alpha': 3.213013076018017e-06, 'eta': 0.1709954233409514, 'C': 2.9996961834292854e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:26,132] Trial 349 finished with value: 0.65425 and parameters: {'iterations': 4206, 'alpha': 5.8368436457603014e-06, 'eta': 0.09379937427314701, 'C': 8.468325967129582e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:26,463] Trial 350 finished with value: 0.68575 and parameters: {'iterations': 2831, 'alpha': 2.144006458324071e-06, 'eta': 0.2443106935041846, 'C': 0.004759695411871991}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:26,897] Trial 351 finished with value: 0.70625 and parameters: {'iterations': 4620, 'alpha': 3.830239660151938e-06, 'eta': 0.14395781355802903, 'C': 4.0417304273230903e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:27,253] Trial 352 finished with value: 0.695875 and parameters: {'iterations': 3218, 'alpha': 9.810635099932659e-06, 'eta': 0.002946475219783819, 'C': 2.675291303765038e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:27,652] Trial 353 finished with value: 0.714 and parameters: {'iterations': 3952, 'alpha': 7.3757534361849994e-06, 'eta': 0.08091814135452112, 'C': 6.579478530393911e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:28,025] Trial 354 finished with value: 0.611625 and parameters: {'iterations': 3532, 'alpha': 1.3451662113427591e-05, 'eta': 0.11621348412355355, 'C': 1.0461233215726314e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:28,457] Trial 355 finished with value: 0.7285 and parameters: {'iterations': 4565, 'alpha': 3.042131814871687e-06, 'eta': 0.5869122751633804, 'C': 1.9436735859682075e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:28,889] Trial 356 finished with value: 0.671125 and parameters: {'iterations': 4687, 'alpha': 1.8086979191208635e-06, 'eta': 0.9924748135585966, 'C': 4.3889445728206245e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:29,341] Trial 357 finished with value: 0.63425 and parameters: {'iterations': 4973, 'alpha': 6.063412084569428e-06, 'eta': 0.617824659037542, 'C': 1.0056353324085628e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:29,792] Trial 358 finished with value: 0.61375 and parameters: {'iterations': 4543, 'alpha': 1.2052464579757166e-05, 'eta': 0.7133120672366702, 'C': 1.7748254330080407e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:30,230] Trial 359 finished with value: 0.67675 and parameters: {'iterations': 4278, 'alpha': 4.6231974691111825e-06, 'eta': 0.5875000397739024, 'C': 7.222859218794406e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:30,642] Trial 360 finished with value: 0.71725 and parameters: {'iterations': 4102, 'alpha': 2.9276785689264564e-06, 'eta': 0.44230206282395684, 'C': 2.629165879118283e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:31,061] Trial 361 finished with value: 0.648875 and parameters: {'iterations': 4115, 'alpha': 2.4297671222620855e-06, 'eta': 0.3436447566373126, 'C': 2.3477622718442648e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:31,494] Trial 362 finished with value: 0.673625 and parameters: {'iterations': 4339, 'alpha': 1.6156446098182173e-06, 'eta': 0.4771438278077507, 'C': 3.0152090967638416e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:31,894] Trial 363 finished with value: 0.72725 and parameters: {'iterations': 3902, 'alpha': 3.433775109366554e-06, 'eta': 0.3648707716341467, 'C': 7.0250319315205e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:32,344] Trial 364 finished with value: 0.552125 and parameters: {'iterations': 4997, 'alpha': 1.2830970881580357e-06, 'eta': 0.785221693124457, 'C': 5.0329015941062435e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:32,790] Trial 365 finished with value: 0.6595 and parameters: {'iterations': 4577, 'alpha': 8.233679509560927e-07, 'eta': 0.4608282800732428, 'C': 1.8676223044184338e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:33,191] Trial 366 finished with value: 0.715375 and parameters: {'iterations': 4015, 'alpha': 4.027525045439915e-06, 'eta': 0.5355645382825979, 'C': 1.708976863966683e-05}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:33,607] Trial 367 finished with value: 0.724875 and parameters: {'iterations': 4329, 'alpha': 7.977052235925712e-06, 'eta': 0.35952918177380583, 'C': 3.225616452730023e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:34,057] Trial 368 finished with value: 0.718 and parameters: {'iterations': 4521, 'alpha': 7.8030350890347e-06, 'eta': 0.4096026553785255, 'C': 7.287237637184239e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:34,487] Trial 369 finished with value: 0.66275 and parameters: {'iterations': 4543, 'alpha': 8.027440511108222e-06, 'eta': 0.6286419742033712, 'C': 2.7312349773518424e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:34,908] Trial 370 finished with value: 0.71375 and parameters: {'iterations': 4307, 'alpha': 1.85863031494987e-08, 'eta': 0.9856854596092722, 'C': 5.809577445820603e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:35,342] Trial 371 finished with value: 0.698875 and parameters: {'iterations': 4604, 'alpha': 5.963548517382912e-06, 'eta': 0.43630104069335923, 'C': 1.6075049943826233e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:35,758] Trial 372 finished with value: 0.61225 and parameters: {'iterations': 4250, 'alpha': 9.021604222823842e-06, 'eta': 0.3524325263412137, 'C': 7.228391994955087e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:36,193] Trial 373 finished with value: 0.684875 and parameters: {'iterations': 4655, 'alpha': 4.534801143662463e-06, 'eta': 0.35136708546721845, 'C': 9.835434663645222e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:36,576] Trial 374 finished with value: 0.5915 and parameters: {'iterations': 3746, 'alpha': 2.95699320846692e-06, 'eta': 0.5632069968078626, 'C': 8.890147517863682}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:37,023] Trial 375 finished with value: 0.729625 and parameters: {'iterations': 4956, 'alpha': 9.372080071650508e-06, 'eta': 0.3088416001838624, 'C': 3.3572968306847363e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:37,435] Trial 376 finished with value: 0.716375 and parameters: {'iterations': 4227, 'alpha': 8.271472322342702e-06, 'eta': 0.33782673897322046, 'C': 3.745206446876417e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:37,615] Trial 377 finished with value: 0.62775 and parameters: {'iterations': 52, 'alpha': 1.298358785478061e-05, 'eta': 0.726011999694926, 'C': 3.591469661571922e-06}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:37,800] Trial 378 finished with value: 0.604625 and parameters: {'iterations': 66, 'alpha': 6.341835660001137e-06, 'eta': 0.4165523306370548, 'C': 3.4356220795750706e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:38,255] Trial 379 finished with value: 0.71425 and parameters: {'iterations': 4932, 'alpha': 2.3729572039751626e-06, 'eta': 0.28113970833098256, 'C': 2.392866672569183e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:38,715] Trial 380 finished with value: 0.638375 and parameters: {'iterations': 4993, 'alpha': 4.9660932596974435e-06, 'eta': 0.5131430003389837, 'C': 4.5546770455305753e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:39,117] Trial 381 finished with value: 0.707125 and parameters: {'iterations': 4013, 'alpha': 1.767953931638408e-05, 'eta': 0.270362573013543, 'C': 2.822216211190404e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:39,496] Trial 382 finished with value: 0.64975 and parameters: {'iterations': 3642, 'alpha': 8.797617983826137e-06, 'eta': 0.7698432685538537, 'C': 3.7751168891769686e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:39,912] Trial 383 finished with value: 0.664375 and parameters: {'iterations': 4369, 'alpha': 2.2932749937880243e-07, 'eta': 0.2876604425359715, 'C': 1.4593396972173187e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:40,313] Trial 384 finished with value: 0.584375 and parameters: {'iterations': 3969, 'alpha': 1.0985934100166832e-05, 'eta': 8.384803007053209e-08, 'C': 5.508966076541703e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:40,754] Trial 385 finished with value: 0.6895 and parameters: {'iterations': 4642, 'alpha': 3.6097027575124893e-06, 'eta': 0.36421318950377285, 'C': 2.1990590757105065e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:41,172] Trial 386 finished with value: 0.731 and parameters: {'iterations': 4228, 'alpha': 6.495051796369739e-06, 'eta': 0.23412237707906533, 'C': 1.1618078242095676e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:41,616] Trial 387 finished with value: 0.65475 and parameters: {'iterations': 4643, 'alpha': 6.4252369405400365e-06, 'eta': 0.22599325855501223, 'C': 8.98787584010579e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:42,086] Trial 388 finished with value: 0.5915 and parameters: {'iterations': 4995, 'alpha': 1.7116530967430532e-05, 'eta': 0.23034208580646995, 'C': 1.2689259284316254}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:42,501] Trial 389 finished with value: 0.638125 and parameters: {'iterations': 3710, 'alpha': 1.2976043328063352e-05, 'eta': 0.2354608546246249, 'C': 4.0260651864487474e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:42,919] Trial 390 finished with value: 0.631125 and parameters: {'iterations': 4455, 'alpha': 8.813390229513601e-06, 'eta': 0.29409604721600086, 'C': 9.763437969082887e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:43,332] Trial 391 finished with value: 0.701875 and parameters: {'iterations': 4189, 'alpha': 5.053264482738332e-06, 'eta': 0.20198386732811963, 'C': 1.431098887894883e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:43,717] Trial 392 finished with value: 0.65275 and parameters: {'iterations': 3732, 'alpha': 2.0176696265564628e-05, 'eta': 0.42965461256214266, 'C': 5.5497819557748515e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:44,169] Trial 393 finished with value: 0.685375 and parameters: {'iterations': 4998, 'alpha': 2.3600998354225568e-05, 'eta': 0.20437791834655433, 'C': 8.220430783941503e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:44,536] Trial 394 finished with value: 0.739375 and parameters: {'iterations': 3431, 'alpha': 8.191253230500681e-06, 'eta': 0.31640737395143964, 'C': 1.2099809498930715e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:44,916] Trial 395 finished with value: 0.665375 and parameters: {'iterations': 3572, 'alpha': 6.281551658023926e-06, 'eta': 0.40217575212001516, 'C': 1.1539543686175581e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:45,283] Trial 396 finished with value: 0.542625 and parameters: {'iterations': 3270, 'alpha': 9.704314366459231e-06, 'eta': 0.3076210359665402, 'C': 7.282546677588873e-08}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:45,644] Trial 397 finished with value: 0.653625 and parameters: {'iterations': 3295, 'alpha': 4.656306596118071e-06, 'eta': 0.5139259480064958, 'C': 4.6640534538170307e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:46,045] Trial 398 finished with value: 0.692 and parameters: {'iterations': 3934, 'alpha': 8.397032493497103e-06, 'eta': 0.2653619693025552, 'C': 1.2369834362065924e-07}. Best is trial 176 with value: 0.7445.
[I 2022-10-17 06:40:46,418] Trial 399 finished with value: 0.61275 and parameters: {'iterations': 3536, 'alpha': 1.940869312293805e-06, 'eta': 0.683665554338094, 'C': 1.2805302095031753e-06}. Best is trial 176 with value: 0.7445.
Best trial:
  Value: 0.7445
  Params: 
    C: 0.00010004557227399993
    alpha: 0.00010732574494827194
    eta: 0.331433832469533
    iterations: 3568
CPU times: user 19min 11s, sys: 12min 18s, total: 31min 30s
Wall time: 2min 8s
In [ ]:
 
In [144]:
study_name="StochasticLogisticRegressionMulit"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params
best_params["do_C_alpha"] = True
clf = StochasticLogisticRegressionMulit(**best_params)
clf.fit(x_train.to_numpy(),y_train.to_numpy())
yhat  = clf.predict_proba(x_test1)
ytrainhat = clf.predict_proba(x_train)
yvalhat = clf.predict_proba(x_train1)

plt.figure(figsize=(15,15))
plt.subplot(3,2,1)
plot_sigbkg(0,"GALAXY") 
plt.subplot(3,2,2)
plot_roc(0,"GALAXY") 
# plt.show()
plt.subplot(323)
plot_sigbkg(1,"QSO") 
plt.subplot(324)
plot_roc(1,"QSO")  
# plt.show()
plt.subplot(325)
plot_sigbkg(2,"STAR")
plt.subplot(326)
plot_roc(2,"STAR") 
plt.show()
In [139]:
study_name="StochasticLogisticRegressionMulit"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
plot_optimization_history(study).show()
In [140]:
plot_slice(study)
In [141]:
plot_contour(study, params=['C','alpha']).show()
In [142]:
plot_contour(study, params=['C','iterations']).show()
In [143]:
plot_contour(study, params=['alpha','iterations']).show()
In [144]:
plot_contour(study, params=['eta','iterations']).show()
In [145]:
plot_contour(study, params=['eta','C']).show()
In [146]:
plot_contour(study, params=['eta','alpha']).show()

Optmizing the Hessian Algorithm with loglikelihood.

In [148]:
%%time
study_name = "HessianBinaryLogisticRegressionMulit"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": trial.suggest_int("iterations", 2, 300, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 10., log=True),
        "eta" : trial.suggest_float("eta", 1e-8, 1.0, log=True),
        "C": trial.suggest_float("C", 1e-8, 10, log=True),
        "do_C_alpha": True
            }
    clf = HessianBinaryLogisticRegressionMulit(**param)
    clf.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = clf.predict(x_test1)
    acc = accuracy_score(y_test1,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=40)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-17 07:10:58,607] A new study created in RDB with name: HessianBinaryLogisticRegressionMulit
[I 2022-10-17 07:11:01,186] Trial 0 finished with value: 0.756 and parameters: {'iterations': 6, 'alpha': 0.1809166000653168, 'eta': 0.0002893508535402082, 'C': 0.26549221353874297}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:02,549] Trial 1 finished with value: 0.752 and parameters: {'iterations': 3, 'alpha': 6.668077089201765e-05, 'eta': 0.00039571254560482694, 'C': 7.132813334257169e-07}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:05,878] Trial 2 finished with value: 0.7525 and parameters: {'iterations': 8, 'alpha': 0.0009928485395073803, 'eta': 0.007420557638132359, 'C': 1.59768618123995e-07}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:11,201] Trial 3 finished with value: 0.6955 and parameters: {'iterations': 13, 'alpha': 0.004521172678459179, 'eta': 4.1642402217624674e-08, 'C': 2.0569121334840683}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:14,124] Trial 4 finished with value: 0.752 and parameters: {'iterations': 7, 'alpha': 4.888844868664669e-08, 'eta': 7.467739782833443e-07, 'C': 6.048272206478038e-08}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:15,859] Trial 5 finished with value: 0.752 and parameters: {'iterations': 4, 'alpha': 1.396589395787286e-08, 'eta': 0.0002450912358085901, 'C': 0.0012424290907457102}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:37,072] Trial 6 finished with value: 0.755 and parameters: {'iterations': 53, 'alpha': 7.705142285215653e-06, 'eta': 0.08678493970034086, 'C': 4.710830243498148e-06}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:38,802] Trial 7 finished with value: 0.752 and parameters: {'iterations': 4, 'alpha': 3.6321111309992404e-08, 'eta': 6.791131584146555e-07, 'C': 3.302008019304491e-08}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:47,521] Trial 8 finished with value: 0.753 and parameters: {'iterations': 21, 'alpha': 0.0010581287434043049, 'eta': 5.9551739879341977e-05, 'C': 0.060407995698473986}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:11:50,524] Trial 9 finished with value: 0.755 and parameters: {'iterations': 7, 'alpha': 0.7940863099125441, 'eta': 8.61161116257374e-06, 'C': 0.19999583788133163}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:13:25,654] Trial 10 finished with value: 0.754 and parameters: {'iterations': 232, 'alpha': 1.0770897646505113, 'eta': 0.13786071918800685, 'C': 0.0009047470854583304}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:13:56,358] Trial 11 finished with value: 0.756 and parameters: {'iterations': 75, 'alpha': 4.915603727552889e-06, 'eta': 0.48666189512803276, 'C': 1.2296771862828527e-05}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:14:21,474] Trial 12 finished with value: 0.7555 and parameters: {'iterations': 62, 'alpha': 0.04388376984432191, 'eta': 0.6352517309033298, 'C': 3.349779070463113e-05}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:14:51,109] Trial 13 finished with value: 0.755 and parameters: {'iterations': 74, 'alpha': 2.9080864698572334e-06, 'eta': 0.007444998647592055, 'C': 0.015622881825358965}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:16:04,586] Trial 14 finished with value: 0.7555 and parameters: {'iterations': 184, 'alpha': 0.04177605834172401, 'eta': 0.003386832311638051, 'C': 5.900815625452824e-05}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:16:17,125] Trial 15 finished with value: 0.3935 and parameters: {'iterations': 31, 'alpha': 8.63943561513274, 'eta': 2.0972823328901324e-05, 'C': 5.501491363550158}. Best is trial 0 with value: 0.756.
[I 2022-10-17 07:16:18,080] Trial 16 finished with value: 0.7585 and parameters: {'iterations': 2, 'alpha': 8.66088594873072e-07, 'eta': 0.692303901613639, 'C': 0.0018553238808001942}. Best is trial 16 with value: 0.7585.
[I 2022-10-17 07:16:19,038] Trial 17 finished with value: 0.757 and parameters: {'iterations': 2, 'alpha': 1.1554621375154316e-06, 'eta': 0.6152462251159775, 'C': 0.004849683453906376}. Best is trial 16 with value: 0.7585.
[I 2022-10-17 07:16:19,994] Trial 18 finished with value: 0.7525 and parameters: {'iterations': 2, 'alpha': 4.022524854062258e-07, 'eta': 0.024559804954313147, 'C': 0.004758912986567399}. Best is trial 16 with value: 0.7585.
[I 2022-10-17 07:16:20,948] Trial 19 finished with value: 0.7585 and parameters: {'iterations': 2, 'alpha': 3.6299687181889206e-07, 'eta': 0.6984336643389221, 'C': 0.0004461156468236103}. Best is trial 16 with value: 0.7585.
[I 2022-10-17 07:16:21,918] Trial 20 finished with value: 0.752 and parameters: {'iterations': 2, 'alpha': 3.7964535624879686e-05, 'eta': 0.0018982982955685732, 'C': 0.0001867580602921972}. Best is trial 16 with value: 0.7585.
[I 2022-10-17 07:16:22,891] Trial 21 finished with value: 0.759 and parameters: {'iterations': 2, 'alpha': 4.6822927838166577e-07, 'eta': 0.7564409359068404, 'C': 0.005581060086724471}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:24,275] Trial 22 finished with value: 0.753 and parameters: {'iterations': 3, 'alpha': 3.73894913067781e-07, 'eta': 0.06996304237942068, 'C': 0.0010699640453024897}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:29,332] Trial 23 finished with value: 0.7545 and parameters: {'iterations': 12, 'alpha': 2.2182873225480781e-07, 'eta': 0.03511749493586516, 'C': 0.00026381566646262655}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:31,126] Trial 24 finished with value: 0.7555 and parameters: {'iterations': 4, 'alpha': 2.3196111329842917e-05, 'eta': 0.16874607938004385, 'C': 0.023141652726330585}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:32,099] Trial 25 finished with value: 0.7565 and parameters: {'iterations': 2, 'alpha': 1.859582314353831e-07, 'eta': 0.8740448574993066, 'C': 0.006219560564160624}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:33,473] Trial 26 finished with value: 0.7525 and parameters: {'iterations': 3, 'alpha': 1.0028405193329285e-08, 'eta': 0.021012715342770723, 'C': 9.273099374014875e-05}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:35,659] Trial 27 finished with value: 0.756 and parameters: {'iterations': 5, 'alpha': 0.00013638170233670185, 'eta': 0.16222142845822243, 'C': 5.325932950978443e-06}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:40,665] Trial 28 finished with value: 0.752 and parameters: {'iterations': 12, 'alpha': 1.5077613064584829e-06, 'eta': 0.2526359167867729, 'C': 0.19789360531188246}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:42,042] Trial 29 finished with value: 0.752 and parameters: {'iterations': 3, 'alpha': 7.778202468007114e-08, 'eta': 0.0014430469622181178, 'C': 0.0010144787920848444}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:44,208] Trial 30 finished with value: 0.7545 and parameters: {'iterations': 5, 'alpha': 8.414803311393153e-06, 'eta': 0.9571500999198124, 'C': 0.04508994842232997}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:45,176] Trial 31 finished with value: 0.7545 and parameters: {'iterations': 2, 'alpha': 1.1710290793052182e-06, 'eta': 0.9941082432059205, 'C': 0.004395373024657894}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:46,179] Trial 32 finished with value: 0.7525 and parameters: {'iterations': 2, 'alpha': 1.1773517392581188e-06, 'eta': 0.04803243568649276, 'C': 0.004559797090429581}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:47,617] Trial 33 finished with value: 0.7575 and parameters: {'iterations': 3, 'alpha': 7.071908221459851e-07, 'eta': 0.3332942187790445, 'C': 0.0004761902419714211}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:49,039] Trial 34 finished with value: 0.752 and parameters: {'iterations': 3, 'alpha': 1.7190180939699946e-05, 'eta': 0.011151093138494194, 'C': 1.3671830324076766e-06}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:52,972] Trial 35 finished with value: 0.7535 and parameters: {'iterations': 9, 'alpha': 0.00017930060706729468, 'eta': 0.2859136049282087, 'C': 0.0005662331038122955}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:55,120] Trial 36 finished with value: 0.752 and parameters: {'iterations': 5, 'alpha': 9.615383767546238e-08, 'eta': 1.1115433032478383e-08, 'C': 0.000142938423355532}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:16:56,480] Trial 37 finished with value: 0.753 and parameters: {'iterations': 3, 'alpha': 4.999607054491799e-07, 'eta': 0.06225573289946811, 'C': 3.386769268856207e-05}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:17:04,758] Trial 38 finished with value: 0.7215 and parameters: {'iterations': 20, 'alpha': 2.4230920749342534e-08, 'eta': 0.2665578957830507, 'C': 1.485164293775392}. Best is trial 21 with value: 0.759.
[I 2022-10-17 07:17:06,514] Trial 39 finished with value: 0.7525 and parameters: {'iterations': 4, 'alpha': 9.613552073511407e-08, 'eta': 0.00046608975494273803, 'C': 0.0018723628745640152}. Best is trial 21 with value: 0.759.
Best trial:
  Value: 0.759
  Params: 
    C: 0.005581060086724471
    alpha: 4.6822927838166577e-07
    eta: 0.7564409359068404
    iterations: 2
CPU times: user 2h 10min 57s, sys: 1h 28min 36s, total: 3h 39min 34s
Wall time: 6min 8s
In [149]:
study_name="HessianBinaryLogisticRegressionMulit"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params
best_params["do_C_alpha"] = True
clf = StochasticLogisticRegressionMulit(**best_params)
clf.fit(x_train.to_numpy(),y_train.to_numpy())
yhat  = clf.predict_proba(x_test1)
ytrainhat = clf.predict_proba(x_train)
yvalhat = clf.predict_proba(x_train1)

plt.figure(figsize=(15,15))
plt.subplot(3,2,1)
plot_sigbkg(0,"GALAXY") 
plt.subplot(3,2,2)
plot_roc(0,"GALAXY") 
# plt.show()
plt.subplot(323)
plot_sigbkg(1,"QSO") 
plt.subplot(324)
plot_roc(1,"QSO")  
# plt.show()
plt.subplot(325)
plot_sigbkg(2,"STAR")
plt.subplot(326)
plot_roc(2,"STAR") 
plt.show()
In [147]:
study_name="HessianBinaryLogisticRegressionMulit"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
plot_optimization_history(study).show()
In [148]:
plot_slice(study)
In [149]:
plot_contour(study, params=['C','alpha']).show()
In [150]:
plot_contour(study, params=['C','iterations']).show()
In [151]:
plot_contour(study, params=['alpha','iterations']).show()
In [152]:
plot_contour(study, params=['C','eta']).show()
In [153]:
plot_contour(study, params=['alpha','eta']).show()

MSE version of Linesearch

In [164]:
%%time
study_name = "LineSearchLogisticMSERegression"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": 20,
        "line_iters": trial.suggest_int("line_iters", 2, 100, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 10., log=True),
#         "eta" : trial.suggest_float("eta", 1e-8, 1.0, log=True),
        "C": trial.suggest_float("C", 1e-8, 10, log=True),
        "do_C_alpha": True,
#         'sample_weight': True,
            }
    clf = LineSearchLogisticMSERegressionMulit(**param,eta=0)
    clf.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = clf.predict(x_test1.to_numpy())
    acc = accuracy_score(y_test1,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=40)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-17 07:56:05,500] A new study created in RDB with name: LineSearchLogisticMSERegression
[I 2022-10-17 07:56:07,486] Trial 0 finished with value: 0.586 and parameters: {'line_iters': 35, 'alpha': 0.0014440605935876435, 'C': 0.00029274873124114235}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:10,344] Trial 1 finished with value: 0.586 and parameters: {'line_iters': 51, 'alpha': 0.0007470642720501014, 'C': 2.7356530630200386}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:10,724] Trial 2 finished with value: 0.586 and parameters: {'line_iters': 5, 'alpha': 1.2670892707327658e-08, 'C': 5.980743561337255e-05}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:11,162] Trial 3 finished with value: 0.586 and parameters: {'line_iters': 6, 'alpha': 0.0778502085307415, 'C': 0.04768785780833104}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:12,954] Trial 4 finished with value: 0.586 and parameters: {'line_iters': 32, 'alpha': 0.2819355741357072, 'C': 0.000891642517494784}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:13,698] Trial 5 finished with value: 0.586 and parameters: {'line_iters': 12, 'alpha': 9.538570501266482e-06, 'C': 2.2812943655444863e-05}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:14,808] Trial 6 finished with value: 0.586 and parameters: {'line_iters': 19, 'alpha': 2.0298689066942352e-07, 'C': 0.5580742983995792}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:19,776] Trial 7 finished with value: 0.586 and parameters: {'line_iters': 92, 'alpha': 0.00016372656800194086, 'C': 1.7416110675788793e-06}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:20,002] Trial 8 finished with value: 0.586 and parameters: {'line_iters': 2, 'alpha': 1.4866062713224708e-08, 'C': 0.001797994097240162}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:20,279] Trial 9 finished with value: 0.586 and parameters: {'line_iters': 3, 'alpha': 4.983672467669182, 'C': 0.003740095978409903}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:25,617] Trial 10 finished with value: 0.586 and parameters: {'line_iters': 99, 'alpha': 0.0030888843961031976, 'C': 4.2766771889814555e-08}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:28,018] Trial 11 finished with value: 0.586 and parameters: {'line_iters': 42, 'alpha': 0.0008066760158980291, 'C': 5.972896414856554}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:30,462] Trial 12 finished with value: 0.586 and parameters: {'line_iters': 44, 'alpha': 1.860690281518567e-05, 'C': 0.10723868332015025}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:31,386] Trial 13 finished with value: 0.586 and parameters: {'line_iters': 15, 'alpha': 0.01895231475606203, 'C': 1.5461956282894485e-06}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:34,506] Trial 14 finished with value: 0.586 and parameters: {'line_iters': 57, 'alpha': 4.1552179778010795e-05, 'C': 8.212488752109257}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:35,947] Trial 15 finished with value: 0.586 and parameters: {'line_iters': 25, 'alpha': 0.0019766137488317864, 'C': 0.01730387737342208}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:36,605] Trial 16 finished with value: 0.586 and parameters: {'line_iters': 10, 'alpha': 1.3196961250162714e-06, 'C': 2.739979108240032e-05}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:39,831] Trial 17 finished with value: 0.586 and parameters: {'line_iters': 59, 'alpha': 5.2502346283642514e-05, 'C': 0.00013748153221633975}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:41,340] Trial 18 finished with value: 0.586 and parameters: {'line_iters': 26, 'alpha': 2.596541938057707, 'C': 0.005205690316817154}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:41,996] Trial 19 finished with value: 0.586 and parameters: {'line_iters': 10, 'alpha': 1.439499931118861e-06, 'C': 3.0322203752333574e-06}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:45,763] Trial 20 finished with value: 0.586 and parameters: {'line_iters': 69, 'alpha': 0.008186123851050128, 'C': 0.00022879125254986194}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:47,265] Trial 21 finished with value: 0.586 and parameters: {'line_iters': 26, 'alpha': 6.568226124134527, 'C': 0.007806185247975874}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:47,817] Trial 22 finished with value: 0.586 and parameters: {'line_iters': 8, 'alpha': 1.6404685284355092e-06, 'C': 1.0418997278239548e-06}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:51,789] Trial 23 finished with value: 0.586 and parameters: {'line_iters': 73, 'alpha': 0.03185869633084647, 'C': 1.0195805155566663e-07}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:53,669] Trial 24 finished with value: 0.586 and parameters: {'line_iters': 33, 'alpha': 1.2043838734953618, 'C': 0.00044790502366538363}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:54,174] Trial 25 finished with value: 0.586 and parameters: {'line_iters': 7, 'alpha': 0.2892530724045152, 'C': 1.2348667447191631e-07}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:54,492] Trial 26 finished with value: 0.586 and parameters: {'line_iters': 3, 'alpha': 0.044204574831110593, 'C': 1.1240945328535248e-08}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:56,594] Trial 27 finished with value: 0.586 and parameters: {'line_iters': 37, 'alpha': 0.6154620249093836, 'C': 1.2473679963853232e-05}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:57,682] Trial 28 finished with value: 0.586 and parameters: {'line_iters': 18, 'alpha': 0.3512413192320753, 'C': 0.0005684810865893982}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:58,028] Trial 29 finished with value: 0.586 and parameters: {'line_iters': 4, 'alpha': 0.08956915392857609, 'C': 1.48053492618216e-08}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:58,265] Trial 30 finished with value: 0.586 and parameters: {'line_iters': 2, 'alpha': 0.00044182039227419126, 'C': 5.26538613377489e-06}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:59,250] Trial 31 finished with value: 0.586 and parameters: {'line_iters': 16, 'alpha': 0.6727702952484568, 'C': 3.3261543043740105e-07}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:59,596] Trial 32 finished with value: 0.586 and parameters: {'line_iters': 4, 'alpha': 0.1955105050466318, 'C': 1.0268167698872615e-05}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:56:59,836] Trial 33 finished with value: 0.586 and parameters: {'line_iters': 2, 'alpha': 0.0006199356613333805, 'C': 1.0474759438229016e-08}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:57:00,183] Trial 34 finished with value: 0.586 and parameters: {'line_iters': 4, 'alpha': 0.007207931816184703, 'C': 2.401496338081806e-07}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:57:01,640] Trial 35 finished with value: 0.586 and parameters: {'line_iters': 25, 'alpha': 3.532047913725698e-06, 'C': 0.00014125056434764635}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:57:05,537] Trial 36 finished with value: 0.586 and parameters: {'line_iters': 71, 'alpha': 1.205743632350333e-07, 'C': 0.13705745104349895}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:57:06,043] Trial 37 finished with value: 0.586 and parameters: {'line_iters': 7, 'alpha': 0.00013610097180624689, 'C': 0.010840716929046463}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:57:07,292] Trial 38 finished with value: 0.586 and parameters: {'line_iters': 21, 'alpha': 0.019821687351570232, 'C': 5.517507471431232e-07}. Best is trial 0 with value: 0.586.
[I 2022-10-17 07:57:08,010] Trial 39 finished with value: 0.586 and parameters: {'line_iters': 11, 'alpha': 2.0225288305049196e-07, 'C': 1.0107355728258152}. Best is trial 0 with value: 0.586.
Best trial:
  Value: 0.586
  Params: 
    C: 0.00029274873124114235
    alpha: 0.0014440605935876435
    line_iters: 35
CPU times: user 20min 38s, sys: 16min 38s, total: 37min 16s
Wall time: 1min 2s
In [ ]:
%%time
study_name = "LineSearchLogisticMSERegressiona"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": 20,
        "line_iters": trial.suggest_int("line_iters", 2, 100, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 10., log=True),
        "eta" : trial.suggest_float("eta", 1e-8, 1.0, log=True),
#         "C": trial.suggest_float("C", 1e-8, 10, log=True),
        "do_alpha": True,
#         'sample_weight': True,
            }
    clf = LineSearchLogisticMSERegressionMulit(**param)
    clf.fit(x_train1.to_numpy(),y_train1.to_numpy())
    yhat = clf.predict(x_test1.to_numpy())
    acc = accuracy_score(y_test1,yhat)

    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=40)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-17 08:05:03,788] A new study created in RDB with name: LineSearchLogisticMSERegressiona
[I 2022-10-17 08:05:07,275] Trial 0 finished with value: 0.081 and parameters: {'line_iters': 6, 'alpha': 8.195423582288986e-08, 'eta': 0.0013101628720596907}. Best is trial 0 with value: 0.081.
[I 2022-10-17 08:05:14,062] Trial 1 finished with value: 0.081 and parameters: {'line_iters': 12, 'alpha': 4.035847124290851e-05, 'eta': 1.2663840407301795e-05}. Best is trial 0 with value: 0.081.
[I 2022-10-17 08:05:18,094] Trial 2 finished with value: 0.081 and parameters: {'line_iters': 7, 'alpha': 0.003915791417753053, 'eta': 0.009354055632210602}. Best is trial 0 with value: 0.081.
[I 2022-10-17 08:05:19,337] Trial 3 finished with value: 0.081 and parameters: {'line_iters': 2, 'alpha': 1.6330455001221186e-07, 'eta': 0.0028297409913561893}. Best is trial 0 with value: 0.081.
[I 2022-10-17 08:05:23,919] Trial 4 finished with value: 0.3405 and parameters: {'line_iters': 8, 'alpha': 6.321292584884093, 'eta': 1.8825354338421578e-07}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:12,838] Trial 5 finished with value: 0.083 and parameters: {'line_iters': 87, 'alpha': 0.04947986195133895, 'eta': 0.0003513402459724548}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:14,074] Trial 6 finished with value: 0.081 and parameters: {'line_iters': 2, 'alpha': 2.076341316537541e-06, 'eta': 0.001213063045615807}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:16,971] Trial 7 finished with value: 0.081 and parameters: {'line_iters': 5, 'alpha': 3.7340279034311563, 'eta': 0.9158922000189262}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:18,764] Trial 8 finished with value: 0.082 and parameters: {'line_iters': 3, 'alpha': 0.025631290715052404, 'eta': 1.3713169141944707e-05}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:23,908] Trial 9 finished with value: 0.081 and parameters: {'line_iters': 9, 'alpha': 5.160319138370617e-06, 'eta': 0.0008917279124892209}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:40,097] Trial 10 finished with value: 0.172 and parameters: {'line_iters': 29, 'alpha': 4.402131024810917, 'eta': 2.413921475409237e-08}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:06:57,967] Trial 11 finished with value: 0.2015 and parameters: {'line_iters': 32, 'alpha': 7.27109910007038, 'eta': 2.2814713892901635e-08}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:07:14,252] Trial 12 finished with value: 0.088 and parameters: {'line_iters': 29, 'alpha': 0.41715008120086267, 'eta': 3.006432418609896e-08}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:07:28,287] Trial 13 finished with value: 0.0425 and parameters: {'line_iters': 25, 'alpha': 8.424328094679709, 'eta': 3.014672335879844e-07}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:08:06,952] Trial 14 finished with value: 0.081 and parameters: {'line_iters': 69, 'alpha': 0.002942339050666894, 'eta': 7.817687128165801e-07}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:08:35,450] Trial 15 finished with value: 0.093 and parameters: {'line_iters': 51, 'alpha': 0.30652131726575327, 'eta': 6.160092331511701e-07}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:08:45,083] Trial 16 finished with value: 0.081 and parameters: {'line_iters': 17, 'alpha': 0.0007463658104576257, 'eta': 1.6066166002614327e-08}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:08:54,085] Trial 17 finished with value: 0.0805 and parameters: {'line_iters': 16, 'alpha': 0.4815904965459957, 'eta': 8.466285836739479e-06}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:09:16,528] Trial 18 finished with value: 0.083 and parameters: {'line_iters': 40, 'alpha': 0.04654610511043809, 'eta': 1.294802452511054e-07}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:09:18,879] Trial 19 finished with value: 0.081 and parameters: {'line_iters': 4, 'alpha': 7.419490652476777e-05, 'eta': 2.4653285917403775e-06}. Best is trial 4 with value: 0.3405.
[I 2022-10-17 08:09:25,153] Trial 20 finished with value: 0.4255 and parameters: {'line_iters': 11, 'alpha': 0.9977451371187314, 'eta': 6.360573474217365e-05}. Best is trial 20 with value: 0.4255.
[I 2022-10-17 08:09:30,874] Trial 21 finished with value: 0.4285 and parameters: {'line_iters': 10, 'alpha': 0.860652075119673, 'eta': 0.00012735647482429953}. Best is trial 21 with value: 0.4285.
[I 2022-10-17 08:09:36,602] Trial 22 finished with value: 0.4305 and parameters: {'line_iters': 10, 'alpha': 0.7838194955664664, 'eta': 8.447716250467433e-05}. Best is trial 22 with value: 0.4305.
[I 2022-10-17 08:09:45,092] Trial 23 finished with value: 0.0845 and parameters: {'line_iters': 15, 'alpha': 0.3904700659528319, 'eta': 8.764819335465032e-05}. Best is trial 22 with value: 0.4305.
[I 2022-10-17 08:09:50,790] Trial 24 finished with value: 0.082 and parameters: {'line_iters': 10, 'alpha': 0.017680643693515616, 'eta': 9.001312742608908e-05}. Best is trial 22 with value: 0.4305.
[I 2022-10-17 08:10:02,153] Trial 25 finished with value: 0.088 and parameters: {'line_iters': 20, 'alpha': 0.13902176457020504, 'eta': 0.14968317886524032}. Best is trial 22 with value: 0.4305.
[I 2022-10-17 08:10:08,458] Trial 26 finished with value: 0.396 and parameters: {'line_iters': 11, 'alpha': 1.585694904560528, 'eta': 0.015470555425072969}. Best is trial 22 with value: 0.4305.
[I 2022-10-17 08:10:10,831] Trial 27 finished with value: 0.081 and parameters: {'line_iters': 4, 'alpha': 0.005591468302802004, 'eta': 4.420823883813146e-05}. Best is trial 22 with value: 0.4305.
[I 2022-10-17 08:10:14,290] Trial 28 finished with value: 0.417 and parameters: {'line_iters': 6, 'alpha': 1.2828157953502095, 'eta': 0.0002441842588474963}. Best is trial 22 with value: 0.4305.
In [ ]:
%%time
study_name = "StochasticLogisticMSERegressionMulit"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)

def objective(trial):
    param = {
        "iterations": trial.suggest_int("iterations", 50, 5000, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 10., log=True),
        "eta" : trial.suggest_float("eta", 1e-8, 1.0, log=True),
        "C": trial.suggest_float("C", 1e-8, 10, log=True),
        "do_C_alpha": True
            }
    clf = StochasticLogisticMSERegressionMulit(**param)
    clf.fit(x_train.to_numpy(),y_train.to_numpy())
    yhat = clf.predict(x_train1)
    acc = accuracy_score(y_train1,yhat)
    return acc


# pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=400)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
[I 2022-10-18 21:51:40,173] A new study created in RDB with name: StochasticLogisticMSERegressionMulit
[I 2022-10-18 21:51:40,574] Trial 0 finished with value: 0.693125 and parameters: {'iterations': 3925, 'alpha': 9.503862986256443e-06, 'eta': 0.0011266278501225247, 'C': 7.092682358260865e-05}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:40,727] Trial 1 finished with value: 0.636375 and parameters: {'iterations': 171, 'alpha': 0.1399990436137353, 'eta': 2.3066144904697167e-07, 'C': 0.0004562209740428278}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:41,006] Trial 2 finished with value: 0.5970625 and parameters: {'iterations': 2441, 'alpha': 0.0016490630420215532, 'eta': 2.1769535237170717e-07, 'C': 0.0028693390707477734}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:41,281] Trial 3 finished with value: 0.6824375 and parameters: {'iterations': 2352, 'alpha': 0.022464589209953974, 'eta': 0.00021979551877576798, 'C': 0.00031843264896372375}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:41,530] Trial 4 finished with value: 0.58025 and parameters: {'iterations': 921, 'alpha': 7.911773291242899e-08, 'eta': 1.2800412538851745e-08, 'C': 0.028647237084875706}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:41,747] Trial 5 finished with value: 0.59475 and parameters: {'iterations': 1289, 'alpha': 1.0155461718080792, 'eta': 3.601869424496089e-05, 'C': 0.040245613999495194}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:41,923] Trial 6 finished with value: 0.5939375 and parameters: {'iterations': 179, 'alpha': 5.1055925794006925e-06, 'eta': 1.0401202774108129e-07, 'C': 0.05646178231200102}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:42,268] Trial 7 finished with value: 0.674 and parameters: {'iterations': 3806, 'alpha': 0.0815192171273738, 'eta': 8.437184276222792e-05, 'C': 0.4769979632903394}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:42,437] Trial 8 finished with value: 0.59475 and parameters: {'iterations': 581, 'alpha': 4.857496253910788, 'eta': 1.1712782110920171e-08, 'C': 0.005484785980304352}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:42,597] Trial 9 finished with value: 0.5655625 and parameters: {'iterations': 426, 'alpha': 1.586913003399788e-08, 'eta': 1.4346408459536291e-06, 'C': 0.0008778704713288941}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:42,757] Trial 10 finished with value: 0.2770625 and parameters: {'iterations': 98, 'alpha': 1.7255471541927646e-05, 'eta': 0.156670317234937, 'C': 5.169263354367282e-07}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:43,151] Trial 11 finished with value: 0.6846875 and parameters: {'iterations': 4372, 'alpha': 0.0009406212750874213, 'eta': 0.011784718665874005, 'C': 4.16105759073779e-06}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:43,541] Trial 12 finished with value: 0.6573125 and parameters: {'iterations': 4337, 'alpha': 0.00011054783313321942, 'eta': 0.022120468335722666, 'C': 1.510060170003367e-06}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:43,799] Trial 13 finished with value: 0.67025 and parameters: {'iterations': 1861, 'alpha': 6.857471496567746e-07, 'eta': 0.0041605786864917695, 'C': 9.131068891365995e-06}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:44,246] Trial 14 finished with value: 0.59475 and parameters: {'iterations': 4896, 'alpha': 0.0018995083861608336, 'eta': 0.7528287033193563, 'C': 4.186460676772783e-08}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:44,406] Trial 15 finished with value: 0.5796875 and parameters: {'iterations': 51, 'alpha': 0.00011776850507945259, 'eta': 0.0012106919012380927, 'C': 1.6656431519297096e-05}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:44,610] Trial 16 finished with value: 0.6430625 and parameters: {'iterations': 826, 'alpha': 1.4319010315173114e-06, 'eta': 0.018920498889462043, 'C': 2.8718875200133962e-05}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:44,852] Trial 17 finished with value: 0.6055 and parameters: {'iterations': 1503, 'alpha': 0.001552781133436109, 'eta': 1.266605274526695e-05, 'C': 3.358467778259523e-08}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:45,163] Trial 18 finished with value: 0.5955625 and parameters: {'iterations': 2791, 'alpha': 1.73333206295854e-05, 'eta': 0.0005003070824486594, 'C': 6.52927179873536}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:45,339] Trial 19 finished with value: 0.68725 and parameters: {'iterations': 353, 'alpha': 0.017406185847034177, 'eta': 0.004880177592019467, 'C': 3.899402451142314e-07}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:45,515] Trial 20 finished with value: 0.4236875 and parameters: {'iterations': 344, 'alpha': 0.006782237713534221, 'eta': 0.13821486638680536, 'C': 2.575249061826286e-07}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:45,687] Trial 21 finished with value: 0.691375 and parameters: {'iterations': 291, 'alpha': 0.00027489249364291365, 'eta': 0.005496178323521231, 'C': 4.9335020045755645e-05}. Best is trial 0 with value: 0.693125.
[I 2022-10-18 21:51:45,859] Trial 22 finished with value: 0.6953125 and parameters: {'iterations': 260, 'alpha': 0.00013909689244341158, 'eta': 0.0023875043544274942, 'C': 6.778749823691853e-05}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:46,031] Trial 23 finished with value: 0.591 and parameters: {'iterations': 257, 'alpha': 6.85839088099455e-05, 'eta': 0.0014346468244212612, 'C': 0.00010475027963053021}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:46,196] Trial 24 finished with value: 0.562125 and parameters: {'iterations': 136, 'alpha': 5.989671667524733e-07, 'eta': 0.046982741673743396, 'C': 5.711644731077027e-05}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:46,385] Trial 25 finished with value: 0.5750625 and parameters: {'iterations': 581, 'alpha': 2.0601240066725195e-05, 'eta': 9.9418894490457e-06, 'C': 9.398451022233037e-05}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:46,546] Trial 26 finished with value: 0.5255625 and parameters: {'iterations': 79, 'alpha': 0.00022078684017771684, 'eta': 0.0010793412748044276, 'C': 3.0761030055196385e-06}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:46,717] Trial 27 finished with value: 0.5910625 and parameters: {'iterations': 275, 'alpha': 3.2330184029491354e-06, 'eta': 0.00019180346618179658, 'C': 0.002816115146898409}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:46,890] Trial 28 finished with value: 0.676375 and parameters: {'iterations': 210, 'alpha': 0.0004340862827201933, 'eta': 0.0026711968947430408, 'C': 0.00013967489046096793}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:47,058] Trial 29 finished with value: 0.12625 and parameters: {'iterations': 144, 'alpha': 2.420558677140342e-07, 'eta': 0.11520374841400044, 'C': 0.0004942985550594622}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:47,262] Trial 30 finished with value: 0.6685 and parameters: {'iterations': 826, 'alpha': 2.2723872025751273e-05, 'eta': 0.0004387747351414913, 'C': 1.1963932229033528e-05}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:47,443] Trial 31 finished with value: 0.686625 and parameters: {'iterations': 424, 'alpha': 0.014639357229874857, 'eta': 0.004364573190138324, 'C': 4.434062853161552e-07}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:47,617] Trial 32 finished with value: 0.62775 and parameters: {'iterations': 321, 'alpha': 0.14694226406556302, 'eta': 0.00890604173924972, 'C': 0.001321307496174023}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:47,785] Trial 33 finished with value: 0.603 and parameters: {'iterations': 231, 'alpha': 0.005264370065456095, 'eta': 6.475425569449547e-05, 'C': 1.065379222430241e-08}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:47,971] Trial 34 finished with value: 0.554625 and parameters: {'iterations': 516, 'alpha': 0.07525743515977203, 'eta': 0.0382593790945584, 'C': 1.577018993878703e-07}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:48,190] Trial 35 finished with value: 0.672 and parameters: {'iterations': 1094, 'alpha': 0.00035313610306137355, 'eta': 0.0053328523186936035, 'C': 0.0074093102703442154}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:48,358] Trial 36 finished with value: 0.5929375 and parameters: {'iterations': 174, 'alpha': 0.6021819011667159, 'eta': 0.00033524974882019625, 'C': 1.1456924857207105e-06}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:48,561] Trial 37 finished with value: 0.59475 and parameters: {'iterations': 698, 'alpha': 5.090651297656488e-05, 'eta': 0.7994579378064841, 'C': 0.0003622815025046143}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:48,740] Trial 38 finished with value: 0.6191875 and parameters: {'iterations': 370, 'alpha': 0.029071706547608423, 'eta': 2.2582716338059496e-05, 'C': 4.540332813929371e-06}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:48,904] Trial 39 finished with value: 0.5423125 and parameters: {'iterations': 116, 'alpha': 0.0038252747181180802, 'eta': 0.0001590025717836744, 'C': 3.913772270929128e-05}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:49,078] Trial 40 finished with value: 0.6885 and parameters: {'iterations': 298, 'alpha': 9.258875285455818e-06, 'eta': 0.0016081088622651577, 'C': 0.02017293996507285}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:49,247] Trial 41 finished with value: 0.6711875 and parameters: {'iterations': 209, 'alpha': 1.918530561091591e-06, 'eta': 0.0018350797329507865, 'C': 0.19453355054761629}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:49,421] Trial 42 finished with value: 0.6325625 and parameters: {'iterations': 290, 'alpha': 4.497517431882114e-06, 'eta': 0.0007248840639382398, 'C': 0.01647092932001255}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:49,608] Trial 43 finished with value: 0.682875 and parameters: {'iterations': 474, 'alpha': 7.483075195462946e-06, 'eta': 0.0077535824157440245, 'C': 0.1337415037621037}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:49,791] Trial 44 finished with value: 0.627 and parameters: {'iterations': 385, 'alpha': 4.985094760651108e-08, 'eta': 0.052103505137328476, 'C': 0.0018564919539164366}. Best is trial 22 with value: 0.6953125.
[I 2022-10-18 21:51:49,984] Trial 45 finished with value: 0.7011875 and parameters: {'iterations': 590, 'alpha': 0.0005233700926595125, 'eta': 0.003086307996366359, 'C': 0.00033457633856390424}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:50,287] Trial 46 finished with value: 0.641625 and parameters: {'iterations': 2499, 'alpha': 0.0006085208790028681, 'eta': 7.669372252621992e-05, 'C': 0.0002653780939340337}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:50,499] Trial 47 finished with value: 0.68575 and parameters: {'iterations': 657, 'alpha': 0.0001802004624835334, 'eta': 0.016036550367566446, 'C': 0.005876793499718464}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:50,742] Trial 48 finished with value: 0.6235625 and parameters: {'iterations': 1568, 'alpha': 3.9232334716881314e-05, 'eta': 0.0027011860728971488, 'C': 1.9602733642329901}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:51,083] Trial 49 finished with value: 0.691875 and parameters: {'iterations': 3324, 'alpha': 1.0083753654104075e-05, 'eta': 0.0007245960115307385, 'C': 0.01824467695153051}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:51,445] Trial 50 finished with value: 0.6898125 and parameters: {'iterations': 3748, 'alpha': 0.00010312681848131788, 'eta': 0.00037972320429918423, 'C': 0.0006654405216758592}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:51,782] Trial 51 finished with value: 0.6699375 and parameters: {'iterations': 3320, 'alpha': 0.0012683192243869417, 'eta': 0.00025831657352410307, 'C': 0.00022443175593317907}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:52,051] Trial 52 finished with value: 0.671875 and parameters: {'iterations': 2013, 'alpha': 9.128676592688586e-05, 'eta': 0.0007303997667094821, 'C': 0.0008019071975607737}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:52,389] Trial 53 finished with value: 0.625625 and parameters: {'iterations': 3206, 'alpha': 4.266039587360992e-05, 'eta': 5.0425524340144436e-05, 'C': 3.016778082021173e-05}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:52,765] Trial 54 finished with value: 0.599375 and parameters: {'iterations': 3973, 'alpha': 0.0007722282106317305, 'eta': 2.7651240964649674e-08, 'C': 0.0006435253072187802}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:53,094] Trial 55 finished with value: 0.60525 and parameters: {'iterations': 3194, 'alpha': 0.00018558153445820638, 'eta': 5.024088361906587e-06, 'C': 9.165053802092968e-05}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:53,509] Trial 56 finished with value: 0.652125 and parameters: {'iterations': 4704, 'alpha': 1.259239309109037e-05, 'eta': 0.00012905076894007006, 'C': 2.1053468163899818e-05}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:53,776] Trial 57 finished with value: 0.69275 and parameters: {'iterations': 1978, 'alpha': 2.998893258298717e-05, 'eta': 0.0009117420785434878, 'C': 7.182809925987342e-06}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:54,001] Trial 58 finished with value: 0.6816875 and parameters: {'iterations': 1170, 'alpha': 1.051281374867506e-06, 'eta': 0.0026856397054020593, 'C': 7.027510593983144e-06}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:54,266] Trial 59 finished with value: 0.685125 and parameters: {'iterations': 1856, 'alpha': 3.021677167443715e-07, 'eta': 0.0007564846827737375, 'C': 2.0935263531350955e-06}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:54,548] Trial 60 finished with value: 0.59475 and parameters: {'iterations': 2204, 'alpha': 0.0018966465407464547, 'eta': 0.2911600960284489, 'C': 6.343969651238795e-05}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:54,860] Trial 61 finished with value: 0.6935 and parameters: {'iterations': 2778, 'alpha': 0.00010339701915901837, 'eta': 0.00036734881789117865, 'C': 1.426347470728588e-05}. Best is trial 45 with value: 0.7011875.
[I 2022-10-18 21:51:55,166] Trial 62 finished with value: 0.7188125 and parameters: {'iterations': 2668, 'alpha': 0.0003095332949094324, 'eta': 0.01034523897485303, 'C': 1.0850139960882207e-05}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:55,498] Trial 63 finished with value: 0.6283125 and parameters: {'iterations': 3075, 'alpha': 3.014543197013129e-06, 'eta': 0.02466791017718164, 'C': 8.339035982486875e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:55,789] Trial 64 finished with value: 0.6996875 and parameters: {'iterations': 2425, 'alpha': 3.0383333916546093e-05, 'eta': 0.009288946898134802, 'C': 1.0157632267419861e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:56,092] Trial 65 finished with value: 0.7031875 and parameters: {'iterations': 2588, 'alpha': 2.7334502742439912e-05, 'eta': 0.002672474285037094, 'C': 1.041697251822493e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:56,396] Trial 66 finished with value: 0.7111875 and parameters: {'iterations': 2422, 'alpha': 0.0004220844586771267, 'eta': 0.011242640914786928, 'C': 7.952428576308467e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:56,639] Trial 67 finished with value: 0.704375 and parameters: {'iterations': 1551, 'alpha': 0.002810580977985804, 'eta': 0.010925118970497635, 'C': 7.106997044642805e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:56,881] Trial 68 finished with value: 0.1224375 and parameters: {'iterations': 1468, 'alpha': 0.00039355504138631877, 'eta': 0.07862203725772537, 'C': 1.0347831638604064e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:57,131] Trial 69 finished with value: 0.60675 and parameters: {'iterations': 1679, 'alpha': 0.00319450406990542, 'eta': 0.009723372973379234, 'C': 9.476352821917798e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:57,432] Trial 70 finished with value: 0.676875 and parameters: {'iterations': 2507, 'alpha': 0.0008146935809722336, 'eta': 0.03191420037480271, 'C': 1.9516252823014954e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:57,747] Trial 71 finished with value: 0.6545 and parameters: {'iterations': 2753, 'alpha': 0.00012054830994860405, 'eta': 0.015767271793652064, 'C': 7.19342768281664e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:58,041] Trial 72 finished with value: 0.698125 and parameters: {'iterations': 2246, 'alpha': 6.726245969071077e-05, 'eta': 0.003153242823226115, 'C': 2.1478313853425505e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:58,285] Trial 73 finished with value: 0.641625 and parameters: {'iterations': 1376, 'alpha': 0.009076568387259645, 'eta': 0.007377370962809219, 'C': 2.549514379265647e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:58,572] Trial 74 finished with value: 0.7008125 and parameters: {'iterations': 2252, 'alpha': 0.00033521976840427054, 'eta': 0.0034271146039066033, 'C': 6.593978925525945e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:58,863] Trial 75 finished with value: 0.69225 and parameters: {'iterations': 2283, 'alpha': 0.002105983635863264, 'eta': 0.003565846383325967, 'C': 1.668695518695301e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:59,079] Trial 76 finished with value: 0.48125 and parameters: {'iterations': 947, 'alpha': 0.000278377333603532, 'eta': 0.20385091706764374, 'C': 7.772618716253881e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:59,332] Trial 77 finished with value: 0.3476875 and parameters: {'iterations': 1666, 'alpha': 6.220613272545566e-05, 'eta': 0.010853244549688884, 'C': 3.721946386175886e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:59,615] Trial 78 finished with value: 0.4575 and parameters: {'iterations': 2205, 'alpha': 0.0005964896615187657, 'eta': 0.07971129958563707, 'C': 6.246180060791718e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:51:59,844] Trial 79 finished with value: 0.6643125 and parameters: {'iterations': 1240, 'alpha': 0.0010533536579809275, 'eta': 0.00481954275460962, 'C': 4.26027391075947e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:00,239] Trial 80 finished with value: 0.6245 and parameters: {'iterations': 3670, 'alpha': 2.2410155218844983e-05, 'eta': 0.018746885438401452, 'C': 4.404595105819098e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:00,550] Trial 81 finished with value: 0.6925 and parameters: {'iterations': 2524, 'alpha': 0.00020251990930247096, 'eta': 0.0018620742816455721, 'C': 1.3731497899628492e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:00,816] Trial 82 finished with value: 0.7008125 and parameters: {'iterations': 1755, 'alpha': 0.0004293579470562121, 'eta': 0.006412326751296286, 'C': 3.1019750179080224e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:01,081] Trial 83 finished with value: 0.2208125 and parameters: {'iterations': 1749, 'alpha': 0.002216707560530759, 'eta': 0.0068031241974343066, 'C': 3.0211844888501924e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:01,361] Trial 84 finished with value: 0.7064375 and parameters: {'iterations': 2056, 'alpha': 0.00035762562840940647, 'eta': 0.012569203763300323, 'C': 1.1509588276555326e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:01,661] Trial 85 finished with value: 0.4828125 and parameters: {'iterations': 2016, 'alpha': 0.0005231076041833636, 'eta': 0.013881968675609401, 'C': 1.4918535181828927e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:01,876] Trial 86 finished with value: 0.5095625 and parameters: {'iterations': 985, 'alpha': 0.00360550589269136, 'eta': 0.06488159568012165, 'C': 8.395192912768816e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:02,114] Trial 87 finished with value: 0.6741875 and parameters: {'iterations': 1343, 'alpha': 0.010851953596447908, 'eta': 0.028960640469637745, 'C': 5.811600604701814e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:02,438] Trial 88 finished with value: 0.68975 and parameters: {'iterations': 2894, 'alpha': 0.00030330977456924715, 'eta': 0.011280500868019433, 'C': 1.3855048335285017e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:02,661] Trial 89 finished with value: 0.621375 and parameters: {'iterations': 1086, 'alpha': 0.0012874887301069725, 'eta': 0.038638361950152515, 'C': 2.4133403082892747e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:02,907] Trial 90 finished with value: 0.59475 and parameters: {'iterations': 1532, 'alpha': 5.036381194531705, 'eta': 0.0015464443226052421, 'C': 8.627684844697008e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:03,186] Trial 91 finished with value: 0.697375 and parameters: {'iterations': 2132, 'alpha': 0.00044563161007962136, 'eta': 0.0027702649449535236, 'C': 3.5155276105754797e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:03,492] Trial 92 finished with value: 0.6988125 and parameters: {'iterations': 2576, 'alpha': 6.582054149826947e-05, 'eta': 0.0035670139190619116, 'C': 2.8091600451339616e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:03,811] Trial 93 finished with value: 0.699375 and parameters: {'iterations': 2746, 'alpha': 3.4583220079806325e-05, 'eta': 0.0060016595084996785, 'C': 3.450100213178361e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:04,215] Trial 94 finished with value: 0.700875 and parameters: {'iterations': 4265, 'alpha': 0.00017347664530763807, 'eta': 0.006245700779780398, 'C': 5.242675518206694e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:04,619] Trial 95 finished with value: 0.647875 and parameters: {'iterations': 4268, 'alpha': 0.00013872948686022543, 'eta': 0.01900004145876298, 'C': 1.378571691817429e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:04,883] Trial 96 finished with value: 0.699 and parameters: {'iterations': 1845, 'alpha': 0.00026064814516213013, 'eta': 0.007848674494307657, 'C': 6.116441467649704e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:05,244] Trial 97 finished with value: 0.6984375 and parameters: {'iterations': 3622, 'alpha': 0.0008070802282720236, 'eta': 0.004743406872417305, 'C': 3.411319815807196e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:05,587] Trial 98 finished with value: 0.58275 and parameters: {'iterations': 3067, 'alpha': 0.006387135032663799, 'eta': 4.50240845689707e-07, 'C': 4.4038660309509035e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:06,043] Trial 99 finished with value: 0.7019375 and parameters: {'iterations': 4703, 'alpha': 1.5028656458037947e-05, 'eta': 0.0010676757887202839, 'C': 2.2893324606610807e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:06,408] Trial 100 finished with value: 0.6666875 and parameters: {'iterations': 3617, 'alpha': 0.027433722174258133, 'eta': 0.0012083794531337142, 'C': 2.3791995805366653e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:06,701] Trial 101 finished with value: 0.7095625 and parameters: {'iterations': 2377, 'alpha': 1.334872551893341e-05, 'eta': 0.012236920195325988, 'C': 1.3822259319669896e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:07,107] Trial 102 finished with value: 0.681 and parameters: {'iterations': 4455, 'alpha': 4.7515380906215765e-06, 'eta': 0.02223728384402277, 'C': 1.4860185923618895e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:07,530] Trial 103 finished with value: 0.685625 and parameters: {'iterations': 4707, 'alpha': 7.811969623428195e-06, 'eta': 0.0020734039835038293, 'C': 1.7974329623130602e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:07,930] Trial 104 finished with value: 0.240625 and parameters: {'iterations': 4251, 'alpha': 1.6839071651814598e-05, 'eta': 0.014265819860884708, 'C': 1.3120482024134433e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:08,388] Trial 105 finished with value: 0.71175 and parameters: {'iterations': 4900, 'alpha': 0.00016800923566067112, 'eta': 0.004162889392266359, 'C': 1.0752482662762674e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:08,833] Trial 106 finished with value: 0.714125 and parameters: {'iterations': 4941, 'alpha': 0.00016270289742659815, 'eta': 0.003966593201070944, 'C': 5.1371943385464865e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:09,268] Trial 107 finished with value: 0.683375 and parameters: {'iterations': 4842, 'alpha': 0.00016578549530360748, 'eta': 0.0010763258950001382, 'C': 5.306145399193777e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:09,657] Trial 108 finished with value: 0.694125 and parameters: {'iterations': 3980, 'alpha': 7.779295784558303e-05, 'eta': 0.0005089478728330342, 'C': 2.0748553108536905e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:10,013] Trial 109 finished with value: 0.67825 and parameters: {'iterations': 3481, 'alpha': 1.3835192691789893e-05, 'eta': 0.00438443395351995, 'C': 3.872260101116172e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:10,417] Trial 110 finished with value: 0.424 and parameters: {'iterations': 4227, 'alpha': 4.4051185752205835e-05, 'eta': 0.04120955481088163, 'C': 6.649197189684857e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:10,796] Trial 111 finished with value: 0.6990625 and parameters: {'iterations': 3916, 'alpha': 0.0005431754790499731, 'eta': 0.006198390212969016, 'C': 1.1520617885934404e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:11,221] Trial 112 finished with value: 0.652625 and parameters: {'iterations': 4825, 'alpha': 0.00019589938356155242, 'eta': 0.011117566911208918, 'C': 1.0902511446335328e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:11,543] Trial 113 finished with value: 0.693 and parameters: {'iterations': 2864, 'alpha': 0.0013563088595314355, 'eta': 0.002215785773065455, 'C': 1.1243683938608369e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:11,877] Trial 114 finished with value: 0.6129375 and parameters: {'iterations': 3096, 'alpha': 0.00012078868407227472, 'eta': 0.02410628844368458, 'C': 2.8114780668397676e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:12,156] Trial 115 finished with value: 0.645 and parameters: {'iterations': 1994, 'alpha': 0.0008617477889314056, 'eta': 0.007925478659910166, 'C': 2.0364028537331346e-07}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:12,335] Trial 116 finished with value: 0.576125 and parameters: {'iterations': 69, 'alpha': 2.6226644485040865e-05, 'eta': 0.0014900029158942567, 'C': 1.976709275263201e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:12,808] Trial 117 finished with value: 0.7045625 and parameters: {'iterations': 4944, 'alpha': 0.00031607143956603376, 'eta': 0.00374160508863828, 'C': 0.0001532546580048644}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:13,203] Trial 118 finished with value: 0.7073125 and parameters: {'iterations': 4078, 'alpha': 0.00284855579260016, 'eta': 0.004120990729997684, 'C': 1.0251956951713655e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:13,561] Trial 119 finished with value: 0.70175 and parameters: {'iterations': 3390, 'alpha': 0.0029015261733106017, 'eta': 0.003624535041975471, 'C': 0.00014773024957641902}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:13,999] Trial 120 finished with value: 0.63075 and parameters: {'iterations': 4937, 'alpha': 0.0025608483752211065, 'eta': 0.013472510686014516, 'C': 0.00017333100999342336}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:14,350] Trial 121 finished with value: 0.6718125 and parameters: {'iterations': 3300, 'alpha': 0.0016302730424686193, 'eta': 0.003788141760953244, 'C': 3.653930275535947e-05}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:14,732] Trial 122 finished with value: 0.681625 and parameters: {'iterations': 3868, 'alpha': 0.006140812187975698, 'eta': 0.002204694219666194, 'C': 0.0026558171438989674}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:15,177] Trial 123 finished with value: 0.7034375 and parameters: {'iterations': 4977, 'alpha': 0.0032831536635923057, 'eta': 0.004341207286353042, 'C': 0.00032434283226299006}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:15,596] Trial 124 finished with value: 0.5978125 and parameters: {'iterations': 4511, 'alpha': 0.004472422205647692, 'eta': 0.009824194466157843, 'C': 0.00013152137628240856}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:15,956] Trial 125 finished with value: 0.6793125 and parameters: {'iterations': 3532, 'alpha': 0.018504955977526133, 'eta': 0.005226986896578318, 'C': 1.222544882332016e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:16,402] Trial 126 finished with value: 0.638 and parameters: {'iterations': 4951, 'alpha': 0.059384848789725206, 'eta': 0.017305930531865205, 'C': 3.2377665208270006e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:16,795] Trial 127 finished with value: 0.6950625 and parameters: {'iterations': 3939, 'alpha': 0.001151376763895955, 'eta': 0.0012445730123502295, 'C': 2.1177254299696258e-05}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:17,106] Trial 128 finished with value: 0.320375 and parameters: {'iterations': 2623, 'alpha': 0.0030426799392493384, 'eta': 0.0028795501763830365, 'C': 0.00030797696666569915}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:17,441] Trial 129 finished with value: 0.455125 and parameters: {'iterations': 2994, 'alpha': 3.200187871089841e-06, 'eta': 0.0006221984209616823, 'C': 1.0504571591464482e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:17,839] Trial 130 finished with value: 0.568375 and parameters: {'iterations': 4150, 'alpha': 6.845746202959853e-06, 'eta': 0.030612972490352452, 'C': 9.3872203979352e-05}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:18,259] Trial 131 finished with value: 0.70275 and parameters: {'iterations': 4561, 'alpha': 0.011403256121787043, 'eta': 0.0045541550716455236, 'C': 0.00035591424010925874}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:18,678] Trial 132 finished with value: 0.6109375 and parameters: {'iterations': 4513, 'alpha': 0.007606036871829851, 'eta': 0.004053047181800776, 'C': 0.0012097868213370388}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:19,033] Trial 133 finished with value: 0.6986875 and parameters: {'iterations': 3325, 'alpha': 0.0018707884838004129, 'eta': 0.009217914174611172, 'C': 5.672933452964768e-05}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:19,398] Trial 134 finished with value: 0.6855 and parameters: {'iterations': 3635, 'alpha': 0.004573574306436325, 'eta': 0.001991288042549573, 'C': 0.0002014782463537437}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:19,694] Trial 135 finished with value: 0.6901875 and parameters: {'iterations': 2366, 'alpha': 0.01592867957691491, 'eta': 0.0048304830585440914, 'C': 0.00048324518314691873}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:20,135] Trial 136 finished with value: 0.3313125 and parameters: {'iterations': 4936, 'alpha': 0.009796206952033149, 'eta': 0.013169866429588579, 'C': 0.0010104033105765651}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:20,560] Trial 137 finished with value: 0.682875 and parameters: {'iterations': 4423, 'alpha': 0.00030943563254491575, 'eta': 0.003034103501468105, 'C': 1.6159433341333514e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:20,946] Trial 138 finished with value: 0.6785625 and parameters: {'iterations': 3925, 'alpha': 0.04469452024759039, 'eta': 0.000977072304068742, 'C': 5.959372768698802e-06}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:21,299] Trial 139 finished with value: 0.6958125 and parameters: {'iterations': 3339, 'alpha': 0.0006764323596943814, 'eta': 0.0061681312442827246, 'C': 0.0004864294902568718}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:21,715] Trial 140 finished with value: 0.702375 and parameters: {'iterations': 4496, 'alpha': 2.074175459602053e-06, 'eta': 0.008302990587247074, 'C': 2.1272264538636018e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:22,124] Trial 141 finished with value: 0.1941875 and parameters: {'iterations': 4392, 'alpha': 1.9016424069531476e-06, 'eta': 0.007458517951140871, 'C': 2.4379271944596703e-08}. Best is trial 62 with value: 0.7188125.
[I 2022-10-18 21:52:22,567] Trial 142 finished with value: 0.7213125 and parameters: {'iterations': 4975, 'alpha': 6.741259946012485e-07, 'eta': 0.010485021012663442, 'C': 3.341918028602051e-08}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:22,995] Trial 143 finished with value: 0.703625 and parameters: {'iterations': 4637, 'alpha': 1.054171419223515e-06, 'eta': 0.05329845737822124, 'C': 4.0972012018943927e-08}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:23,386] Trial 144 finished with value: 0.71325 and parameters: {'iterations': 4042, 'alpha': 3.953320152272408e-07, 'eta': 0.019457011943073615, 'C': 7.637286602531372e-08}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:23,778] Trial 145 finished with value: 0.535125 and parameters: {'iterations': 4037, 'alpha': 3.828846935863582e-07, 'eta': 0.05535919725636706, 'C': 7.622594925422788e-08}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:24,152] Trial 146 finished with value: 0.2978125 and parameters: {'iterations': 3739, 'alpha': 4.539164518554224e-08, 'eta': 0.13800846600721486, 'C': 4.829087576936878e-08}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:24,591] Trial 147 finished with value: 0.51925 and parameters: {'iterations': 4899, 'alpha': 9.776829891458963e-07, 'eta': 0.08010349491383416, 'C': 3.855417444844224e-08}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:24,913] Trial 148 finished with value: 0.667 and parameters: {'iterations': 2789, 'alpha': 1.1613894297929649e-07, 'eta': 0.02613231714082321, 'C': 1.628989046652014e-07}. Best is trial 142 with value: 0.7213125.
[I 2022-10-18 21:52:25,311] Trial 149 finished with value: 0.7238125 and parameters: {'iterations': 4162, 'alpha': 5.226184376578289e-07, 'eta': 0.018786539752659107, 'C': 8.677100156861022e-08}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:25,756] Trial 150 finished with value: 0.4954375 and parameters: {'iterations': 4997, 'alpha': 5.583081357804034e-07, 'eta': 0.040496670722824576, 'C': 1.0319713452219649e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:26,151] Trial 151 finished with value: 0.6113125 and parameters: {'iterations': 4071, 'alpha': 1.0791527727220512e-07, 'eta': 0.02382969521149362, 'C': 6.573311809531995e-08}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:26,554] Trial 152 finished with value: 0.699625 and parameters: {'iterations': 4270, 'alpha': 2.0319006019432433e-07, 'eta': 0.017030906630060842, 'C': 4.699437638755114e-08}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:26,923] Trial 153 finished with value: 0.6999375 and parameters: {'iterations': 3593, 'alpha': 5.702902866069286e-07, 'eta': 0.011740005618612125, 'C': 2.2672319756821856e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:27,210] Trial 154 finished with value: 0.6781875 and parameters: {'iterations': 2101, 'alpha': 6.176580267986713e-07, 'eta': 0.01932530462869114, 'C': 1.008259690836754e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:27,512] Trial 155 finished with value: 0.68475 and parameters: {'iterations': 2431, 'alpha': 1.226182707994406e-06, 'eta': 0.013370647223586888, 'C': 1.4520947927978131e-06}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:27,851] Trial 156 finished with value: 0.59475 and parameters: {'iterations': 3059, 'alpha': 0.00026671477561945693, 'eta': 0.2290292206457031, 'C': 3.492070267192143e-08}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:28,270] Trial 157 finished with value: 0.53575 and parameters: {'iterations': 4499, 'alpha': 0.00010324056466659713, 'eta': 0.052927202084624106, 'C': 1.765678689570081e-08}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:28,715] Trial 158 finished with value: 0.7180625 and parameters: {'iterations': 4995, 'alpha': 2.434944865457863e-07, 'eta': 0.009727177890343046, 'C': 4.414262391794747e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:29,096] Trial 159 finished with value: 0.714 and parameters: {'iterations': 3867, 'alpha': 2.4898476209280174e-07, 'eta': 0.009594372424899507, 'C': 4.781817012328489e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:29,484] Trial 160 finished with value: 0.67125 and parameters: {'iterations': 3998, 'alpha': 1.697007017767324e-07, 'eta': 0.03674970635538325, 'C': 2.765963292659075e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:29,890] Trial 161 finished with value: 0.59475 and parameters: {'iterations': 4249, 'alpha': 3.640482380679049e-07, 'eta': 0.5235116286330421, 'C': 6.061344925684549e-07}. Best is trial 149 with value: 0.7238125.
[I 2022-10-18 21:52:30,335] Trial 162 finished with value: 0.7263125 and parameters: {'iterations': 4975, 'alpha': 2.808178583601693e-07, 'eta': 0.010324247314800299, 'C': 1.0498931474061135e-06}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:30,717] Trial 163 finished with value: 0.626375 and parameters: {'iterations': 3821, 'alpha': 6.835701455193496e-08, 'eta': 0.009865547790421633, 'C': 4.782488428010185e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:31,164] Trial 164 finished with value: 0.44175 and parameters: {'iterations': 4999, 'alpha': 2.941806908323354e-07, 'eta': 0.09327234513474154, 'C': 2.4792174323561156e-06}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:31,611] Trial 165 finished with value: 0.68525 and parameters: {'iterations': 4997, 'alpha': 1.473272711066133e-08, 'eta': 0.018658988879313297, 'C': 8.512601226769102e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:32,034] Trial 166 finished with value: 0.25375 and parameters: {'iterations': 4575, 'alpha': 3.726961163181987e-07, 'eta': 0.026156256460308985, 'C': 1.832123289288865e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:32,437] Trial 167 finished with value: 0.691 and parameters: {'iterations': 4181, 'alpha': 1.5416392579014378e-07, 'eta': 0.013364053056304817, 'C': 4.248506545685818e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:32,808] Trial 168 finished with value: 0.6990625 and parameters: {'iterations': 3522, 'alpha': 2.8171710430570604e-07, 'eta': 0.007428153619466326, 'C': 7.681105607184646e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:33,221] Trial 169 finished with value: 0.588625 and parameters: {'iterations': 4472, 'alpha': 6.62964689578263e-07, 'eta': 0.009856115383208078, 'C': 1.3490761926923882e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:33,601] Trial 170 finished with value: 0.707 and parameters: {'iterations': 3847, 'alpha': 7.966952555429378e-07, 'eta': 0.017203792307471887, 'C': 1.6548019277472664e-06}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:33,976] Trial 171 finished with value: 0.68525 and parameters: {'iterations': 3793, 'alpha': 2.8823904547644963e-08, 'eta': 0.02067223801330976, 'C': 1.2463650322137895e-06}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:34,364] Trial 172 finished with value: 0.709875 and parameters: {'iterations': 4007, 'alpha': 8.609011954686385e-07, 'eta': 0.006336297049863159, 'C': 1.0555965406255951e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:34,701] Trial 173 finished with value: 0.71175 and parameters: {'iterations': 3077, 'alpha': 1.2685334005006497e-06, 'eta': 0.03368153839265407, 'C': 1.0503951623415675e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:35,031] Trial 174 finished with value: 0.7181875 and parameters: {'iterations': 2903, 'alpha': 4.457635466545034e-07, 'eta': 0.029971389612358998, 'C': 1.1137437074393884e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:35,376] Trial 175 finished with value: 0.3286875 and parameters: {'iterations': 3107, 'alpha': 8.71174618463818e-07, 'eta': 0.029195552695960674, 'C': 1.0424073727957006e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:35,732] Trial 176 finished with value: 0.6805625 and parameters: {'iterations': 3324, 'alpha': 1.6032965819026048e-06, 'eta': 0.01616060403161869, 'C': 1.802532124677195e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:36,059] Trial 177 finished with value: 0.653 and parameters: {'iterations': 2718, 'alpha': 4.388502736063419e-07, 'eta': 0.030947748960698192, 'C': 1.3026073647978207e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:36,404] Trial 178 finished with value: 0.682625 and parameters: {'iterations': 2933, 'alpha': 1.934736130631993e-07, 'eta': 0.005960028244385023, 'C': 1.0034142083809248e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:36,791] Trial 179 finished with value: 0.7138125 and parameters: {'iterations': 3682, 'alpha': 4.799945472287292e-07, 'eta': 0.013278495524993935, 'C': 2.8083637670150103e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:37,158] Trial 180 finished with value: 0.6911875 and parameters: {'iterations': 3447, 'alpha': 4.940833014267333e-07, 'eta': 0.01425669056338158, 'C': 2.7560238920865927e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:37,540] Trial 181 finished with value: 0.70125 and parameters: {'iterations': 3859, 'alpha': 2.568270533265807e-07, 'eta': 0.009423013261103978, 'C': 1.5097649505779646e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:37,885] Trial 182 finished with value: 0.6254375 and parameters: {'iterations': 3195, 'alpha': 2.5759539241773045e-06, 'eta': 0.018591095786671685, 'C': 2.4738810611976397e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:38,279] Trial 183 finished with value: 0.675875 and parameters: {'iterations': 4136, 'alpha': 7.969260593741489e-07, 'eta': 0.006881914332624402, 'C': 1.5938568535721065e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:38,650] Trial 184 finished with value: 0.3473125 and parameters: {'iterations': 3742, 'alpha': 1.0894586087790835e-07, 'eta': 0.03826549029784693, 'C': 3.586228423060424e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:38,978] Trial 185 finished with value: 0.7145625 and parameters: {'iterations': 2916, 'alpha': 1.0592976275217851e-06, 'eta': 0.011608933578082794, 'C': 2.3625115270387348e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:39,310] Trial 186 finished with value: 0.7170625 and parameters: {'iterations': 2956, 'alpha': 1.2891381754255339e-06, 'eta': 0.012086127949779788, 'C': 1.0027293662677562e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:39,629] Trial 187 finished with value: 0.71525 and parameters: {'iterations': 2783, 'alpha': 1.131935752028687e-06, 'eta': 0.02468995760799818, 'C': 2.331971757824794e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:39,938] Trial 188 finished with value: 0.673375 and parameters: {'iterations': 2629, 'alpha': 1.359533959377785e-06, 'eta': 0.02301111685719044, 'C': 2.3440262233745992e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:40,266] Trial 189 finished with value: 0.582375 and parameters: {'iterations': 2900, 'alpha': 4.5075254975242145e-07, 'eta': 0.010987616873472928, 'C': 1.177965723592185e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:40,568] Trial 190 finished with value: 0.573625 and parameters: {'iterations': 2438, 'alpha': 1.3605493218318442e-06, 'eta': 0.03492148598073212, 'C': 1.780186030357102e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:40,902] Trial 191 finished with value: 0.7125625 and parameters: {'iterations': 3087, 'alpha': 7.282083039064494e-07, 'eta': 0.016367363698077377, 'C': 1.0372726717406861e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:41,238] Trial 192 finished with value: 0.1649375 and parameters: {'iterations': 3072, 'alpha': 7.775408608844436e-07, 'eta': 0.008958729316374675, 'C': 1.0434815731172099e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:41,552] Trial 193 finished with value: 0.7248125 and parameters: {'iterations': 2622, 'alpha': 2.8054441347635054e-07, 'eta': 0.02389499384162036, 'C': 2.687229008556032e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:41,865] Trial 194 finished with value: 0.5405 and parameters: {'iterations': 2564, 'alpha': 2.971904511786046e-07, 'eta': 0.02582833255717122, 'C': 2.937664832167364e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:42,197] Trial 195 finished with value: 0.6526875 and parameters: {'iterations': 2878, 'alpha': 2.1680702827480233e-07, 'eta': 0.05167766921602173, 'C': 5.361627864812106e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:42,497] Trial 196 finished with value: 0.6525625 and parameters: {'iterations': 2290, 'alpha': 4.6998100273221775e-07, 'eta': 0.013889512294658655, 'C': 1.857488147353058e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:42,819] Trial 197 finished with value: 0.64 and parameters: {'iterations': 2703, 'alpha': 4.4255787030612066e-06, 'eta': 0.020819027150416038, 'C': 3.47556576370217e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:43,171] Trial 198 finished with value: 0.6961875 and parameters: {'iterations': 3321, 'alpha': 2.094432350634524e-06, 'eta': 0.012927874609766427, 'C': 2.3296317682748218e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:43,516] Trial 199 finished with value: 0.72075 and parameters: {'iterations': 3136, 'alpha': 1.4868131619653434e-07, 'eta': 0.03842536638974929, 'C': 1.828998572639652e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:43,863] Trial 200 finished with value: 0.623625 and parameters: {'iterations': 3108, 'alpha': 1.204556284764541e-07, 'eta': 0.06633527047139234, 'C': 5.645526127452023e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:44,195] Trial 201 finished with value: 0.6845625 and parameters: {'iterations': 2827, 'alpha': 1.1073756560525694e-06, 'eta': 0.03827469466040778, 'C': 1.5572423206892533e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:44,559] Trial 202 finished with value: 0.5458125 and parameters: {'iterations': 3430, 'alpha': 6.308686332005991e-07, 'eta': 0.02860676660799352, 'C': 1.0080247789505595e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:44,855] Trial 203 finished with value: 0.678375 and parameters: {'iterations': 2256, 'alpha': 2.965155097578193e-07, 'eta': 0.008441539935260922, 'C': 2.756770188623823e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:45,166] Trial 204 finished with value: 0.67075 and parameters: {'iterations': 2593, 'alpha': 7.501892164011959e-08, 'eta': 0.019038183558879374, 'C': 1.574955385355861e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:45,506] Trial 205 finished with value: 0.7041875 and parameters: {'iterations': 3086, 'alpha': 1.6195007782846504e-07, 'eta': 0.01461835942323147, 'C': 3.958474900106472e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:45,869] Trial 206 finished with value: 0.692 and parameters: {'iterations': 3600, 'alpha': 4.612182973771376e-07, 'eta': 0.006139683731100667, 'C': 1.825301083681859e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:46,170] Trial 207 finished with value: 0.6775 and parameters: {'iterations': 2434, 'alpha': 1.506590669922438e-06, 'eta': 0.02412364789114978, 'C': 2.118129845697896e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:46,500] Trial 208 finished with value: 0.6983125 and parameters: {'iterations': 2910, 'alpha': 8.944673614364741e-07, 'eta': 0.050925301225900825, 'C': 6.443861177505258e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:46,851] Trial 209 finished with value: 0.6565 and parameters: {'iterations': 3324, 'alpha': 2.1107677567234943e-07, 'eta': 0.009785785003479586, 'C': 2.8170303158758334e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:47,163] Trial 210 finished with value: 0.1468125 and parameters: {'iterations': 2693, 'alpha': 3.1746824342700167e-07, 'eta': 0.11159825488091575, 'C': 1.4717625917911512e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:47,559] Trial 211 finished with value: 0.71375 and parameters: {'iterations': 4151, 'alpha': 7.140893581845044e-07, 'eta': 0.006091038447856582, 'C': 1.1424077560466233e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:47,959] Trial 212 finished with value: 0.6798125 and parameters: {'iterations': 4289, 'alpha': 5.476904739553659e-07, 'eta': 0.011709895411660315, 'C': 1.0658589189787699e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:48,326] Trial 213 finished with value: 0.6691875 and parameters: {'iterations': 3603, 'alpha': 1.0183437520770287e-06, 'eta': 0.006015554847062019, 'C': 3.849056988009829e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:48,741] Trial 214 finished with value: 0.5945625 and parameters: {'iterations': 4503, 'alpha': 6.587384850845693e-07, 'eta': 0.016673289427041972, 'C': 1.7032192299571865e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:49,135] Trial 215 finished with value: 0.714375 and parameters: {'iterations': 4071, 'alpha': 2.9373562321897177e-06, 'eta': 0.007829101138132847, 'C': 2.3736313225734928e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:49,524] Trial 216 finished with value: 0.6935625 and parameters: {'iterations': 4074, 'alpha': 2.5016667537094495e-06, 'eta': 0.007566128469833939, 'C': 2.8792804159834957e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:49,901] Trial 217 finished with value: 0.694875 and parameters: {'iterations': 3776, 'alpha': 1.4332115681234776e-06, 'eta': 0.005388764380421104, 'C': 5.239176897824187e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:50,324] Trial 218 finished with value: 0.5229375 and parameters: {'iterations': 4507, 'alpha': 0.48255124838756497, 'eta': 0.03340235150617286, 'C': 2.410017364867005e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:50,674] Trial 219 finished with value: 0.7041875 and parameters: {'iterations': 3260, 'alpha': 3.6893734995914187e-07, 'eta': 0.010006302390604958, 'C': 1.0137041331432175e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:51,069] Trial 220 finished with value: 0.7196875 and parameters: {'iterations': 4118, 'alpha': 1.4062417954327287e-07, 'eta': 0.01976848842796285, 'C': 9.267062555793527e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:51,466] Trial 221 finished with value: 0.7096875 and parameters: {'iterations': 4164, 'alpha': 1.152755698431017e-07, 'eta': 0.024350473573010095, 'C': 4.670266538278034e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:51,838] Trial 222 finished with value: 0.66475 and parameters: {'iterations': 3717, 'alpha': 2.2110633207592296e-07, 'eta': 0.0181171058547217, 'C': 8.191558191614864e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:52,262] Trial 223 finished with value: 0.6779375 and parameters: {'iterations': 4606, 'alpha': 6.920891831189114e-07, 'eta': 0.014271429455528275, 'C': 1.6625582767524306e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:52,662] Trial 224 finished with value: 0.6065 and parameters: {'iterations': 4095, 'alpha': 4.398340808137122e-08, 'eta': 2.756175421537028e-06, 'C': 8.981023645428466e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:53,022] Trial 225 finished with value: 0.698625 and parameters: {'iterations': 3488, 'alpha': 1.603120018319456e-07, 'eta': 0.007636048328780506, 'C': 2.968142468965928e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:53,440] Trial 226 finished with value: 0.6663125 and parameters: {'iterations': 4523, 'alpha': 3.829009244409164e-07, 'eta': 0.04339562519610765, 'C': 2.1346782574364537e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:53,778] Trial 227 finished with value: 0.59475 and parameters: {'iterations': 3040, 'alpha': 9.89034342425083e-07, 'eta': 0.02197988881567998, 'C': 6.392556085210268}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:54,161] Trial 228 finished with value: 0.69825 and parameters: {'iterations': 3946, 'alpha': 2.6994117952477115e-07, 'eta': 0.01116556173563997, 'C': 4.0379227851726704e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:54,517] Trial 229 finished with value: 0.675875 and parameters: {'iterations': 3475, 'alpha': 1.8347106019596246e-06, 'eta': 0.015721507124880213, 'C': 1.0198915311324067e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:54,955] Trial 230 finished with value: 0.717375 and parameters: {'iterations': 4964, 'alpha': 5.014586401430653e-07, 'eta': 0.007230722463561267, 'C': 1.484874426458354e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:55,401] Trial 231 finished with value: 0.628125 and parameters: {'iterations': 4941, 'alpha': 5.141669912649473e-07, 'eta': 2.5691491880803177e-05, 'C': 1.4607092716786982e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:55,807] Trial 232 finished with value: 0.7049375 and parameters: {'iterations': 4331, 'alpha': 6.875254920042099e-07, 'eta': 0.004956492972555741, 'C': 2.3779607038239427e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:56,229] Trial 233 finished with value: 0.708875 and parameters: {'iterations': 4631, 'alpha': 4.282349710097897e-07, 'eta': 0.006885619403620345, 'C': 1.508498479015391e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:56,675] Trial 234 finished with value: 0.720125 and parameters: {'iterations': 4973, 'alpha': 1.2911729012413746e-06, 'eta': 0.010740720443724646, 'C': 1.0094084864407616e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:57,112] Trial 235 finished with value: 0.67125 and parameters: {'iterations': 4874, 'alpha': 3.2840327808251093e-06, 'eta': 0.011309000158970605, 'C': 2.1737335544690647e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:57,555] Trial 236 finished with value: 0.7241875 and parameters: {'iterations': 4999, 'alpha': 1.3315372111589807e-06, 'eta': 0.028948103249148184, 'C': 6.449345372568205e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:57,769] Trial 237 finished with value: 0.6353125 and parameters: {'iterations': 810, 'alpha': 1.2745990161478412e-06, 'eta': 0.02541593054436774, 'C': 5.817057361359139e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:58,186] Trial 238 finished with value: 0.4360625 and parameters: {'iterations': 4541, 'alpha': 1.5707852469321051e-07, 'eta': 0.06952845333351255, 'C': 8.288273832619791e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:58,636] Trial 239 finished with value: 0.6973125 and parameters: {'iterations': 4950, 'alpha': 1.964314420590649e-06, 'eta': 0.03763597054226894, 'C': 1.2278746069955612e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:59,047] Trial 240 finished with value: 0.6020625 and parameters: {'iterations': 4264, 'alpha': 2.5614946339160195e-07, 'eta': 0.029763127918191357, 'C': 3.8256588144427976e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:59,466] Trial 241 finished with value: 0.618625 and parameters: {'iterations': 4451, 'alpha': 1.0140208923958087e-06, 'eta': 0.016986761923722192, 'C': 1.536335238399249e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:52:59,912] Trial 242 finished with value: 0.64525 and parameters: {'iterations': 4946, 'alpha': 5.714100771461441e-07, 'eta': 0.010628534717635385, 'C': 3.0257511937604915e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:00,319] Trial 243 finished with value: 0.5096875 and parameters: {'iterations': 4255, 'alpha': 1.4635393454933666e-06, 'eta': 0.019256828892291872, 'C': 1.981546347391024e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:00,761] Trial 244 finished with value: 0.2050625 and parameters: {'iterations': 4998, 'alpha': 7.601623343268929e-08, 'eta': 0.01322718055076599, 'C': 2.527876086561916e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:01,084] Trial 245 finished with value: 0.647125 and parameters: {'iterations': 2786, 'alpha': 4.0695159648679867e-07, 'eta': 0.028371119795139267, 'C': 5.38120608355336e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:01,469] Trial 246 finished with value: 0.720125 and parameters: {'iterations': 3899, 'alpha': 8.790572742866368e-07, 'eta': 0.008172890104295935, 'C': 1.4345299923951349e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:01,845] Trial 247 finished with value: 0.7224375 and parameters: {'iterations': 3793, 'alpha': 7.059370658956413e-07, 'eta': 0.008269605029212866, 'C': 1.4123168379833013e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:02,218] Trial 248 finished with value: 0.6028125 and parameters: {'iterations': 3762, 'alpha': 8.059440845447262e-07, 'eta': 0.008451758067680837, 'C': 1.4530520103187382e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:02,565] Trial 249 finished with value: 0.715125 and parameters: {'iterations': 3226, 'alpha': 1.1240720902952741e-06, 'eta': 0.009695541505068387, 'C': 3.0140620037403065e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:02,949] Trial 250 finished with value: 0.7049375 and parameters: {'iterations': 3954, 'alpha': 6.373845309794809e-07, 'eta': 0.007880266818145578, 'C': 3.532756781598405e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:03,313] Trial 251 finished with value: 0.24025 and parameters: {'iterations': 3603, 'alpha': 3.446911049925828e-07, 'eta': 0.01437067886206331, 'C': 2.789805944925922e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:03,663] Trial 252 finished with value: 0.6615 and parameters: {'iterations': 3303, 'alpha': 3.1624892331558367e-06, 'eta': 0.009244356369476231, 'C': 6.521142434351821e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:04,060] Trial 253 finished with value: 0.709375 and parameters: {'iterations': 4130, 'alpha': 8.848401956715606e-07, 'eta': 0.00558567493448571, 'C': 4.5968275781751685e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:04,445] Trial 254 finished with value: 0.253625 and parameters: {'iterations': 3729, 'alpha': 4.690828066295672e-07, 'eta': 0.01790653437119519, 'C': 2.1854258685691343e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:04,877] Trial 255 finished with value: 0.6288125 and parameters: {'iterations': 4559, 'alpha': 2.4987781317870274e-07, 'eta': 0.011699321999218951, 'C': 1.4618816080932623e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:05,231] Trial 256 finished with value: 0.7174375 and parameters: {'iterations': 3308, 'alpha': 1.5429832877353744e-06, 'eta': 0.008784845837422008, 'C': 3.05377247152649e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:05,625] Trial 257 finished with value: 0.7128125 and parameters: {'iterations': 4054, 'alpha': 1.90817487339903e-06, 'eta': 0.007685788070872601, 'C': 8.91917164718079e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:05,989] Trial 258 finished with value: 0.7018125 and parameters: {'iterations': 3565, 'alpha': 1.4886969242727844e-07, 'eta': 0.004544109621577415, 'C': 3.2443051664801115e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:06,168] Trial 259 finished with value: 0.577625 and parameters: {'iterations': 140, 'alpha': 1.119283439080715e-06, 'eta': 5.049721688544266e-08, 'C': 4.57781763553499e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:06,578] Trial 260 finished with value: 0.7079375 and parameters: {'iterations': 4376, 'alpha': 2.5108647129319716e-06, 'eta': 0.008411420217667916, 'C': 2.242357829039665e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:06,931] Trial 261 finished with value: 0.681875 and parameters: {'iterations': 3269, 'alpha': 1.5233123742011395e-06, 'eta': 0.006274718938079769, 'C': 6.686533615938858e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:07,309] Trial 262 finished with value: 0.7135 and parameters: {'iterations': 3796, 'alpha': 2.2093813343176842e-07, 'eta': 0.010253263713452665, 'C': 3.369682487266788e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:07,678] Trial 263 finished with value: 0.6955 and parameters: {'iterations': 3619, 'alpha': 9.940009133859271e-08, 'eta': 0.01094576868593978, 'C': 3.2562459661244716e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:08,100] Trial 264 finished with value: 0.7055625 and parameters: {'iterations': 4596, 'alpha': 2.0292285975835946e-07, 'eta': 0.005464113495345817, 'C': 1.8135032432687035e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:08,487] Trial 265 finished with value: 0.71275 and parameters: {'iterations': 3915, 'alpha': 5.15987535474325e-06, 'eta': 0.009172897169035375, 'C': 2.434174558825971e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:08,820] Trial 266 finished with value: 0.6704375 and parameters: {'iterations': 2950, 'alpha': 2.9908777040355156e-07, 'eta': 0.003814501723473974, 'C': 4.1056649945979815e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:09,187] Trial 267 finished with value: 0.722125 and parameters: {'iterations': 3367, 'alpha': 5.442145247893325e-07, 'eta': 0.013001323188013766, 'C': 1.8171519045487093e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:09,544] Trial 268 finished with value: 0.62975 and parameters: {'iterations': 3372, 'alpha': 6.258890540653155e-07, 'eta': 0.013948129626818946, 'C': 1.607597193459981e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:09,876] Trial 269 finished with value: 0.7114375 and parameters: {'iterations': 2864, 'alpha': 1.063707962176819e-06, 'eta': 0.01996302788281568, 'C': 1.4910622713480553e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:10,225] Trial 270 finished with value: 0.689375 and parameters: {'iterations': 3286, 'alpha': 1.6572896545660748e-06, 'eta': 0.007415895951028717, 'C': 2.0085055334088064e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:10,627] Trial 271 finished with value: 0.7 and parameters: {'iterations': 4260, 'alpha': 5.203924320316975e-07, 'eta': 0.012065488125125081, 'C': 2.4536175366078036e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:11,069] Trial 272 finished with value: 0.3250625 and parameters: {'iterations': 4967, 'alpha': 8.718563505398785e-07, 'eta': 0.0056361230599713385, 'C': 1.577639355186853e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:11,411] Trial 273 finished with value: 0.606125 and parameters: {'iterations': 3138, 'alpha': 3.664288946122406e-07, 'eta': 0.025510480754985788, 'C': 1.3973897240418696e-05}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:11,847] Trial 274 finished with value: 0.72625 and parameters: {'iterations': 4470, 'alpha': 1.2262009152309328e-06, 'eta': 0.012946029727351864, 'C': 4.276901938513392e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:12,280] Trial 275 finished with value: 0.6925625 and parameters: {'iterations': 4558, 'alpha': 3.960004011435891e-06, 'eta': 0.014947298720957, 'C': 5.269926889824153e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:12,601] Trial 276 finished with value: 0.5463125 and parameters: {'iterations': 2647, 'alpha': 1.3081650931258561e-06, 'eta': 0.04531933029493046, 'C': 0.08198974255722283}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:13,025] Trial 277 finished with value: 0.7029375 and parameters: {'iterations': 4613, 'alpha': 2.427486150658738e-06, 'eta': 0.020911906676175623, 'C': 4.418925508372789e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:13,391] Trial 278 finished with value: 0.6480625 and parameters: {'iterations': 3566, 'alpha': 1.1014092491730168e-06, 'eta': 0.013406891687856212, 'C': 2.964730427668496e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:13,569] Trial 279 finished with value: 0.602875 and parameters: {'iterations': 57, 'alpha': 1.9290568169034355e-06, 'eta': 0.008522585323240942, 'C': 6.59489939044189e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:13,754] Trial 280 finished with value: 0.673625 and parameters: {'iterations': 114, 'alpha': 5.716585103027186e-07, 'eta': 0.023752260492110145, 'C': 1.156494738295816e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:14,150] Trial 281 finished with value: 0.6310625 and parameters: {'iterations': 3961, 'alpha': 3.837914256167829e-07, 'eta': 0.016425393995243436, 'C': 2.73050368746312e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:14,567] Trial 282 finished with value: 0.716625 and parameters: {'iterations': 4374, 'alpha': 1.5086026895851776e-07, 'eta': 0.011422557061481368, 'C': 1.702986672983671e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:14,996] Trial 283 finished with value: 0.6961875 and parameters: {'iterations': 4545, 'alpha': 9.639996556932324e-08, 'eta': 0.009584365454485292, 'C': 1.0690442405259363e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:15,409] Trial 284 finished with value: 0.248 and parameters: {'iterations': 4264, 'alpha': 5.229817499393911e-08, 'eta': 0.0355083013501945, 'C': 3.61045285101215e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:15,860] Trial 285 finished with value: 0.6755 and parameters: {'iterations': 4997, 'alpha': 2.40238288774536e-07, 'eta': 0.011652113794508664, 'C': 2.1298521707877914e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:16,327] Trial 286 finished with value: 0.657875 and parameters: {'iterations': 4984, 'alpha': 1.3419572583191357e-07, 'eta': 0.02215338074125492, 'C': 1.5334460633759904e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:16,750] Trial 287 finished with value: 0.614125 and parameters: {'iterations': 4548, 'alpha': 1.505990855894728e-07, 'eta': 1.0535817521637546e-05, 'C': 4.414393697751662e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:17,078] Trial 288 finished with value: 0.6743125 and parameters: {'iterations': 2828, 'alpha': 9.344926124999791e-07, 'eta': 0.006755814512702788, 'C': 2.182291884028237e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:17,280] Trial 289 finished with value: 0.6705625 and parameters: {'iterations': 508, 'alpha': 1.723671912286961e-06, 'eta': 0.00439190240443653, 'C': 8.425996525246759e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:17,687] Trial 290 finished with value: 0.704375 and parameters: {'iterations': 4184, 'alpha': 2.754737359249509e-06, 'eta': 0.015832914993261678, 'C': 5.729252893465227e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:17,870] Trial 291 finished with value: 0.5670625 and parameters: {'iterations': 210, 'alpha': 7.267523910539247e-07, 'eta': 0.06062473603648997, 'C': 2.0098000492080024e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:18,219] Trial 292 finished with value: 0.591 and parameters: {'iterations': 3236, 'alpha': 7.694959975101836e-08, 'eta': 2.518379815878966e-07, 'C': 0.6061184609879016}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:18,600] Trial 293 finished with value: 0.7099375 and parameters: {'iterations': 3815, 'alpha': 1.808140105676464e-07, 'eta': 0.009540549521972046, 'C': 6.354241206019126e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:19,021] Trial 294 finished with value: 0.697875 and parameters: {'iterations': 4513, 'alpha': 1.299273356124615e-06, 'eta': 0.029929815228271667, 'C': 4.380411252163034e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:19,468] Trial 295 finished with value: 0.5145 and parameters: {'iterations': 4994, 'alpha': 3.114963078863354e-07, 'eta': 0.0033785873005350983, 'C': 3.137215804479085e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:19,828] Trial 296 finished with value: 0.3255 and parameters: {'iterations': 3430, 'alpha': 2.370744392765991e-08, 'eta': 0.0071519246107844625, 'C': 1.3096718449215443e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:20,139] Trial 297 finished with value: 0.4094375 and parameters: {'iterations': 2505, 'alpha': 5.219436476988262e-07, 'eta': 0.017548737732939034, 'C': 1.4049226151767371e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:20,483] Trial 298 finished with value: 0.7126875 and parameters: {'iterations': 3046, 'alpha': 8.529231706946383e-07, 'eta': 0.01000069763720447, 'C': 3.89854348552305e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:20,882] Trial 299 finished with value: 0.675375 and parameters: {'iterations': 4139, 'alpha': 2.172193090141092e-07, 'eta': 0.02484090545693736, 'C': 7.438022803779972e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:21,278] Trial 300 finished with value: 0.636875 and parameters: {'iterations': 3855, 'alpha': 1.5019997849984209e-06, 'eta': 0.042286317638346324, 'C': 1.870538954914282e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:21,691] Trial 301 finished with value: 0.701625 and parameters: {'iterations': 4428, 'alpha': 3.546759750601888e-07, 'eta': 0.013254358733246744, 'C': 2.986428801086365e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:22,024] Trial 302 finished with value: 0.701125 and parameters: {'iterations': 2785, 'alpha': 2.3193321149628615e-06, 'eta': 0.005218677607493303, 'C': 1.4310209591679651e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:22,478] Trial 303 finished with value: 0.701625 and parameters: {'iterations': 4999, 'alpha': 6.876659864037421e-07, 'eta': 0.00816814081516505, 'C': 5.7675678865871056e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:22,863] Trial 304 finished with value: 0.6991875 and parameters: {'iterations': 3620, 'alpha': 1.3112525874489508e-07, 'eta': 0.01889731063468878, 'C': 9.06755557241098e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:23,275] Trial 305 finished with value: 0.684125 and parameters: {'iterations': 4199, 'alpha': 9.359120455175873e-07, 'eta': 0.012790371547454245, 'C': 1.0336848968574436e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:23,619] Trial 306 finished with value: 0.6630625 and parameters: {'iterations': 3118, 'alpha': 5.381801001550013e-07, 'eta': 0.028010617892818928, 'C': 1.6871446054586846e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:24,051] Trial 307 finished with value: 0.698375 and parameters: {'iterations': 4616, 'alpha': 2.997114008220633e-07, 'eta': 0.002808387854534707, 'C': 2.0069964491545298e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:24,438] Trial 308 finished with value: 0.712125 and parameters: {'iterations': 3926, 'alpha': 1.1131527283999243e-06, 'eta': 0.006586197480169718, 'C': 3.367498209784544e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:24,797] Trial 309 finished with value: 0.6774375 and parameters: {'iterations': 3356, 'alpha': 3.728166857213921e-06, 'eta': 0.010799862830942869, 'C': 9.321236950648469e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:25,112] Trial 310 finished with value: 0.710125 and parameters: {'iterations': 2658, 'alpha': 1.9902913657590693e-07, 'eta': 0.017889564317443797, 'C': 2.2205721296356044e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:25,309] Trial 311 finished with value: 0.6174375 and parameters: {'iterations': 430, 'alpha': 4.5915387515105875e-07, 'eta': 0.00025783152551746275, 'C': 1.0058039204288912e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:25,740] Trial 312 finished with value: 0.7015625 and parameters: {'iterations': 4309, 'alpha': 1.5904175094544568e-06, 'eta': 0.004471413077225527, 'C': 0.007773763083853797}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:26,124] Trial 313 finished with value: 0.5780625 and parameters: {'iterations': 3634, 'alpha': 6.362980659067385e-08, 'eta': 0.04108896071700994, 'C': 4.079363904131965e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:26,559] Trial 314 finished with value: 0.2458125 and parameters: {'iterations': 4580, 'alpha': 5.828781022704359e-06, 'eta': 0.008452633026823014, 'C': 1.6234432882177744e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:26,963] Trial 315 finished with value: 0.6855 and parameters: {'iterations': 3947, 'alpha': 7.315026473758285e-07, 'eta': 0.00010639048970047679, 'C': 5.1174823670307784e-06}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:27,306] Trial 316 finished with value: 0.699625 and parameters: {'iterations': 2963, 'alpha': 1.1358175028232823e-06, 'eta': 0.013624377053228824, 'C': 8.979701180891903e-06}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:27,677] Trial 317 finished with value: 0.562375 and parameters: {'iterations': 3449, 'alpha': 3.3070248710876156e-07, 'eta': 0.022307122320947507, 'C': 5.3696978780960765e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:27,869] Trial 318 finished with value: 0.5205625 and parameters: {'iterations': 325, 'alpha': 1.0585455617040079e-07, 'eta': 0.08337281657913416, 'C': 2.3110247270981913e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:28,303] Trial 319 finished with value: 0.5948125 and parameters: {'iterations': 4643, 'alpha': 1.4574439776910697, 'eta': 0.006935594824783934, 'C': 3.564247592250386e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:28,607] Trial 320 finished with value: 0.6745 and parameters: {'iterations': 2382, 'alpha': 2.1133630826592433e-06, 'eta': 0.010619331991917119, 'C': 1.4725892847518298e-08}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:29,017] Trial 321 finished with value: 0.7235625 and parameters: {'iterations': 4249, 'alpha': 5.343876826857606e-07, 'eta': 0.015087014298621415, 'C': 5.167928892062976e-07}. Best is trial 162 with value: 0.7263125.
[I 2022-10-18 21:53:29,422] Trial 322 finished with value: 0.7349375 and parameters: {'iterations': 4231, 'alpha': 5.977857221378806e-07, 'eta': 0.03164897323160908, 'C': 9.4619346544479e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:29,826] Trial 323 finished with value: 0.6538125 and parameters: {'iterations': 4139, 'alpha': 5.537760193483731e-07, 'eta': 0.033705124757671664, 'C': 1.1860223254225645e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:30,175] Trial 324 finished with value: 0.705375 and parameters: {'iterations': 3197, 'alpha': 7.653582273223602e-07, 'eta': 0.043281300461395764, 'C': 1.7891890246736023e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:30,579] Trial 325 finished with value: 0.4690625 and parameters: {'iterations': 4213, 'alpha': 0.1321787130796877, 'eta': 0.055128670888540475, 'C': 7.10265011874394e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:30,953] Trial 326 finished with value: 0.667625 and parameters: {'iterations': 3609, 'alpha': 1.1824592765145674e-06, 'eta': 0.028555256801101114, 'C': 2.81648360132322e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:31,136] Trial 327 finished with value: 0.635125 and parameters: {'iterations': 156, 'alpha': 4.999348382896606e-07, 'eta': 0.02241363208481924, 'C': 2.725766997352316e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:31,455] Trial 328 finished with value: 0.59475 and parameters: {'iterations': 2641, 'alpha': 3.9218837555625575e-07, 'eta': 0.4900470419773244, 'C': 1.636468335193369e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:31,788] Trial 329 finished with value: 0.4779375 and parameters: {'iterations': 2911, 'alpha': 8.160414630346929e-07, 'eta': 0.016807162150619403, 'C': 1.0033661660621773e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:32,210] Trial 330 finished with value: 0.6758125 and parameters: {'iterations': 4520, 'alpha': 1.6659030258255057e-06, 'eta': 0.02891843517047509, 'C': 3.0784198779863813e-06}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:32,659] Trial 331 finished with value: 0.5585625 and parameters: {'iterations': 5000, 'alpha': 6.55741030229471e-07, 'eta': 0.05724614603626393, 'C': 1.0977175824725573e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:33,042] Trial 332 finished with value: 0.7090625 and parameters: {'iterations': 3871, 'alpha': 2.971178349884727e-06, 'eta': 0.01767412779039282, 'C': 2.2108373196675738e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:33,229] Trial 333 finished with value: 0.6584375 and parameters: {'iterations': 237, 'alpha': 1.194923480650982e-06, 'eta': 0.01418817076869791, 'C': 5.789210668150107e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:33,589] Trial 334 finished with value: 0.698625 and parameters: {'iterations': 3249, 'alpha': 2.673111838407686e-07, 'eta': 0.022808963308792566, 'C': 4.238727748280594e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:34,005] Trial 335 finished with value: 0.6858125 and parameters: {'iterations': 4230, 'alpha': 4.414583726125924e-07, 'eta': 0.033544004128003575, 'C': 1.4145908856983048e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:34,374] Trial 336 finished with value: 0.7129375 and parameters: {'iterations': 3538, 'alpha': 1.681567204709351e-07, 'eta': 0.013544922328635998, 'C': 7.986101481874254e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:34,766] Trial 337 finished with value: 0.368875 and parameters: {'iterations': 3894, 'alpha': 8.511239839146944e-07, 'eta': 0.12024600933398921, 'C': 2.8801919054410312e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:35,212] Trial 338 finished with value: 0.6439375 and parameters: {'iterations': 4562, 'alpha': 1.7096856126035891e-06, 'eta': 0.02031199998623721, 'C': 1.910204626207488e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:35,568] Trial 339 finished with value: 0.5899375 and parameters: {'iterations': 2975, 'alpha': 5.975338197633311e-07, 'eta': 1.3531876947323085e-06, 'C': 5.6700479188903814e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:35,868] Trial 340 finished with value: 0.6586875 and parameters: {'iterations': 2202, 'alpha': 3.5005110406002035e-07, 'eta': 0.009948367630308406, 'C': 2.0718353805446668e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:36,322] Trial 341 finished with value: 0.6884375 and parameters: {'iterations': 4993, 'alpha': 1.1425808541273451e-06, 'eta': 0.03983417543741702, 'C': 3.13681594002911e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:36,730] Trial 342 finished with value: 0.7194375 and parameters: {'iterations': 4205, 'alpha': 2.395526144790711e-06, 'eta': 0.012280610447952344, 'C': 1.4566725237186676e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:37,145] Trial 343 finished with value: 0.7116875 and parameters: {'iterations': 4359, 'alpha': 2.3309917025694916e-07, 'eta': 0.015058300821808267, 'C': 1.0047059177695758e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:37,463] Trial 344 finished with value: 0.685375 and parameters: {'iterations': 2607, 'alpha': 4.7365250824427986e-07, 'eta': 0.023940276265539656, 'C': 1.4843564487636317e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:37,846] Trial 345 finished with value: 0.5908125 and parameters: {'iterations': 3695, 'alpha': 8.594567026690157e-07, 'eta': 1.3632739844740676e-08, 'C': 3.318356842593375e-08}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:38,279] Trial 346 finished with value: 0.72375 and parameters: {'iterations': 4578, 'alpha': 2.2173834128193317e-06, 'eta': 0.011992121756552846, 'C': 1.0026108113739793e-06}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:38,709] Trial 347 finished with value: 0.597875 and parameters: {'iterations': 4621, 'alpha': 4.566898166186679e-06, 'eta': 0.017633373231123357, 'C': 1.1621602608385983e-06}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:39,127] Trial 348 finished with value: 0.5620625 and parameters: {'iterations': 4357, 'alpha': 1.8906647723769232e-06, 'eta': 0.07274373908134524, 'C': 1.7120135793898644e-06}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:39,346] Trial 349 finished with value: 0.5935 and parameters: {'iterations': 687, 'alpha': 2.4594715953776504e-06, 'eta': 0.03156484310265395, 'C': 8.188849334392832e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:39,556] Trial 350 finished with value: 0.697375 and parameters: {'iterations': 599, 'alpha': 1.4026211792776277e-07, 'eta': 0.011841364356665976, 'C': 4.695877674412651e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:39,957] Trial 351 finished with value: 0.5475625 and parameters: {'iterations': 4052, 'alpha': 2.8394849888479844e-07, 'eta': 0.0062915207978172585, 'C': 8.559371932925324e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:40,390] Trial 352 finished with value: 0.705125 and parameters: {'iterations': 4559, 'alpha': 9.130067773280798e-08, 'eta': 0.02057695072094465, 'C': 1.270728113262162e-07}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:40,836] Trial 353 finished with value: 0.722875 and parameters: {'iterations': 4647, 'alpha': 1.4508660113567658e-06, 'eta': 0.009304496289690744, 'C': 2.6909482455472653e-05}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:41,273] Trial 354 finished with value: 0.26925 and parameters: {'iterations': 4659, 'alpha': 3.521912382676462e-06, 'eta': 0.04975345897748574, 'C': 4.145842175825556e-05}. Best is trial 322 with value: 0.7349375.
[I 2022-10-18 21:53:41,728] Trial 355 finished with value: 0.695375 and parameters: {'iterations': 4987, 'alpha': 7.91284740829999e-06, 'eta': 0.013620546855952894, 'C': 2.72127602168996e-06}. Best is trial 322 with value: 0.7349375.
In [97]:
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
import plotly.io as pio
pio.renderers.default = 'iframe' # or 'notebook' or 'colab' or 'jupyterlab'
In [135]:
def get_sample_weight(y):
    sw = [len(y)/(len(np.unique(y))*i) for i in np.bincount(y) ]
    return np.array([sw[j] for j in y])

study_name = "xgboost"  # Unique identifier of the study.
CV_RESULT_DIR = os.getcwd()+f"/{study_name}/"
if not os.path.exists(CV_RESULT_DIR):  os.mkdir(CV_RESULT_DIR)
storage_name = "sqlite:///{}.db".format(study_name)


def objective(trial):
    param = {
        "num_class":3,
        "verbosity": 0,
        "objective": "multi:softprob",
        "eval_metric": "auc",
        "tree_method": "hist",
        "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]),
        "lambda": trial.suggest_float("lambda", 1e-8, 1.0, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 1.0, log=True),
    }

    if param["booster"] == "gbtree" or param["booster"] == "dart":
        param["max_depth"] = trial.suggest_int("max_depth", 1, 9)
        param["eta"] = trial.suggest_float("eta", 1e-8, 1.0, log=True)
        param["gamma"] = trial.suggest_float("gamma", 1e-8, 1.0, log=True)
        param["grow_policy"] = trial.suggest_categorical("grow_policy", ["depthwise", "lossguide"])
    if param["booster"] == "dart":
        param["sample_type"] = trial.suggest_categorical("sample_type", ["uniform", "weighted"])
        param["normalize_type"] = trial.suggest_categorical("normalize_type", ["tree", "forest"])
        param["rate_drop"] = trial.suggest_float("rate_drop", 1e-8, 1.0, log=True)
        param["skip_drop"] = trial.suggest_float("skip_drop", 1e-8, 1.0, log=True)

    results_dict={}
    evals_result = {}
    dtrain = xgb.DMatrix(x_train, y_train,get_sample_weight(y_train))
    dtest = xgb.DMatrix(x_train1, y_train1, get_sample_weight(y_train1))
    dtest_unseen = xgb.DMatrix(x_test1, y_test1,get_sample_weight(y_test1) )
    
    pruning_callback = optuna.integration.XGBoostPruningCallback(trial, "test-auc")
    watchlist = [(dtest, 'test'), (dtrain, 'train')]
    xgbc0 = xgb.train(param, dtrain,evals=watchlist,callbacks=[pruning_callback], evals_result=evals_result)
    
    pd.DataFrame.from_dict({'test-auc':evals_result['test']['auc'],'train-auc':evals_result['train']['auc']}).to_csv(CV_RESULT_DIR+f'{trial.number}.csv')
    trial.set_user_attr("n_estimators", xgbc0.best_iteration)
    
#     test_score = xgbc0.predict(dtest_unseen)#y_dftest
#     train_score = xgbc0.predict(dtrain) #y_train
#     accuracy_test = sklearn.metrics.accuracy_score(y_dftest, np.rint(test_score))
#     accuracy_train = sklearn.metrics.accuracy_score(y_train, np.rint(train_score))
    
    
#     best_score = evals_result['test']['logloss'][-1]
#     print('best_score',best_score)
#     test_train_result = [abs(evals_result['test']['logloss'][-1]),abs(evals_result['train']['logloss'][-1])]
#     dratio_logloss = min(test_train_result)/max(test_train_result)
    
#     if (accuracy_test/accuracy_train)*100.0 < 93.0: 
#         print(f' Trial Pruned test/train Accuracy :  {(accuracy_test/accuracy_train)*100.0}')
#         raise optuna.TrialPruned()
#     preds = xgbc0.predict(dtest)
#     pred_labels = np.rint(preds)
#     accuracy = accuracy_score(y_train1, pred_labels)
#     return accuracy
    
    return evals_result['test']['auc'][-1]
    
    


pruner = optuna.pruners.MedianPruner(n_warmup_steps=5)
# pruner = optuna.pruners.HyperbandPruner()
study = optuna.create_study(direction="maximize",pruner=pruner,storage=storage_name,study_name=study_name)
study.optimize(objective, n_trials=100)

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))

print("  Number of estimators: {}".format(trial.user_attrs["n_estimators"]))
[I 2022-10-17 04:03:21,168] A new study created in RDB with name: xgboost
[0]	test-auc:0.93973	train-auc:0.95148
[1]	test-auc:0.93973	train-auc:0.95149
[2]	test-auc:0.93974	train-auc:0.95149
[3]	test-auc:0.93973	train-auc:0.95149
[4]	test-auc:0.93973	train-auc:0.95149
[5]	test-auc:0.93974	train-auc:0.95150
[6]	test-auc:0.93974	train-auc:0.95149
[7]	test-auc:0.93973	train-auc:0.95149
[8]	test-auc:0.93974	train-auc:0.95149
[9]	test-auc:0.93974	train-auc:0.95149
[I 2022-10-17 04:03:22,263] Trial 0 finished with value: 0.939737 and parameters: {'booster': 'gbtree', 'lambda': 0.029929016671248976, 'alpha': 9.774316399748477e-08, 'max_depth': 9, 'eta': 5.172935877345119e-05, 'gamma': 7.93117045701693e-05, 'grow_policy': 'depthwise'}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.50000	train-auc:0.50000
[1]	test-auc:0.50000	train-auc:0.50000
[2]	test-auc:0.50000	train-auc:0.50000
[3]	test-auc:0.50000	train-auc:0.50000
[4]	test-auc:0.50000	train-auc:0.50000
[5]	test-auc:0.50000	train-auc:0.50000
[6]	test-auc:0.50000	train-auc:0.50000
[7]	test-auc:0.50000	train-auc:0.50000
[8]	test-auc:0.50000	train-auc:0.50000
[9]	test-auc:0.50000	train-auc:0.50000
[I 2022-10-17 04:03:22,828] Trial 1 finished with value: 0.5 and parameters: {'booster': 'gblinear', 'lambda': 1.4524401736224413e-05, 'alpha': 0.6981427405998304}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.79888	train-auc:0.79969
[1]	test-auc:0.80307	train-auc:0.80327
[2]	test-auc:0.80343	train-auc:0.80357
[3]	test-auc:0.80305	train-auc:0.80344
[4]	test-auc:0.80202	train-auc:0.80280
[5]	test-auc:0.80068	train-auc:0.80192
[6]	test-auc:0.79930	train-auc:0.80098
[7]	test-auc:0.79801	train-auc:0.80007
[8]	test-auc:0.79680	train-auc:0.79921
[9]	test-auc:0.79574	train-auc:0.79841
[I 2022-10-17 04:03:23,589] Trial 2 finished with value: 0.795736 and parameters: {'booster': 'gblinear', 'lambda': 0.015054117144604537, 'alpha': 0.06044517024895472}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.79569	train-auc:0.79766
[1]	test-auc:0.80775	train-auc:0.80896
[2]	test-auc:0.82373	train-auc:0.82270
[3]	test-auc:0.83056	train-auc:0.82884
[4]	test-auc:0.83486	train-auc:0.83303
[5]	test-auc:0.83746	train-auc:0.83573
[6]	test-auc:0.83909	train-auc:0.83749
[7]	test-auc:0.84008	train-auc:0.83866
[8]	test-auc:0.84069	train-auc:0.83944
[9]	test-auc:0.84104	train-auc:0.83995
[I 2022-10-17 04:03:24,310] Trial 3 finished with value: 0.84104 and parameters: {'booster': 'gblinear', 'lambda': 4.822595462588059e-05, 'alpha': 8.87827840439803e-06}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.89812	train-auc:0.89878
[1]	test-auc:0.89826	train-auc:0.89897
[2]	test-auc:0.89869	train-auc:0.89937
[3]	test-auc:0.89940	train-auc:0.90001
[4]	test-auc:0.89943	train-auc:0.90000
[5]	test-auc:0.90076	train-auc:0.90136
[6]	test-auc:0.90117	train-auc:0.90186
[7]	test-auc:0.90232	train-auc:0.90294
[8]	test-auc:0.90259	train-auc:0.90316
[9]	test-auc:0.90299	train-auc:0.90358
[I 2022-10-17 04:03:25,123] Trial 4 finished with value: 0.902992 and parameters: {'booster': 'gbtree', 'lambda': 1.920491853756488e-08, 'alpha': 0.0104091166623864, 'max_depth': 5, 'eta': 0.012770228494486974, 'gamma': 0.00016550072847008282, 'grow_policy': 'depthwise'}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.84522	train-auc:0.84455
[1]	test-auc:0.84525	train-auc:0.84458
[2]	test-auc:0.84527	train-auc:0.84461
[3]	test-auc:0.84528	train-auc:0.84462
[4]	test-auc:0.84525	train-auc:0.84458
[5]	test-auc:0.84525	train-auc:0.84458
[6]	test-auc:0.84525	train-auc:0.84458
[7]	test-auc:0.84525	train-auc:0.84458
[8]	test-auc:0.84525	train-auc:0.84459
[9]	test-auc:0.84525	train-auc:0.84459
[I 2022-10-17 04:03:26,128] Trial 5 finished with value: 0.845248 and parameters: {'booster': 'dart', 'lambda': 1.0775354501869274e-06, 'alpha': 2.9461456656855947e-08, 'max_depth': 3, 'eta': 1.8923357664933008e-05, 'gamma': 0.006812301811608997, 'grow_policy': 'depthwise', 'sample_type': 'weighted', 'normalize_type': 'tree', 'rate_drop': 0.09210434009039925, 'skip_drop': 1.8692991938303875e-08}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.78303	train-auc:0.78574
[1]	test-auc:0.78180	train-auc:0.78475
[2]	test-auc:0.78093	train-auc:0.78416
[3]	test-auc:0.78002	train-auc:0.78344
[4]	test-auc:0.77915	train-auc:0.78271
[5]	test-auc:0.77843	train-auc:0.78209
[I 2022-10-17 04:03:26,573] Trial 6 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.91276	train-auc:0.91535
[1]	test-auc:0.91280	train-auc:0.91535
[2]	test-auc:0.91280	train-auc:0.91535
[3]	test-auc:0.91280	train-auc:0.91535
[4]	test-auc:0.91280	train-auc:0.91535
[5]	test-auc:0.91280	train-auc:0.91535
[6]	test-auc:0.91280	train-auc:0.91535
[7]	test-auc:0.91280	train-auc:0.91535
[8]	test-auc:0.91280	train-auc:0.91535
[9]	test-auc:0.91280	train-auc:0.91535
[I 2022-10-17 04:03:27,412] Trial 7 finished with value: 0.912801 and parameters: {'booster': 'gbtree', 'lambda': 0.0005296063513353699, 'alpha': 0.043558552861284724, 'max_depth': 6, 'eta': 0.00013861198910242693, 'gamma': 0.6879865352108832, 'grow_policy': 'depthwise'}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.79739	train-auc:0.79897
[1]	test-auc:0.81197	train-auc:0.81255
[2]	test-auc:0.81944	train-auc:0.81924
[3]	test-auc:0.82353	train-auc:0.82296
[4]	test-auc:0.82553	train-auc:0.82482
[I 2022-10-17 04:03:27,856] Trial 8 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.79570	train-auc:0.79766
[1]	test-auc:0.80776	train-auc:0.80897
[2]	test-auc:0.82375	train-auc:0.82271
[3]	test-auc:0.83056	train-auc:0.82884
[4]	test-auc:0.83487	train-auc:0.83303
[5]	test-auc:0.83748	train-auc:0.83574
[I 2022-10-17 04:03:28,349] Trial 9 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.74040	train-auc:0.75400
[1]	test-auc:0.89248	train-auc:0.91200
[2]	test-auc:0.89234	train-auc:0.91195
[3]	test-auc:0.91037	train-auc:0.92801
[4]	test-auc:0.90269	train-auc:0.92210
[5]	test-auc:0.90405	train-auc:0.92113
[6]	test-auc:0.90541	train-auc:0.92221
[7]	test-auc:0.90616	train-auc:0.92332
[8]	test-auc:0.90476	train-auc:0.92221
[9]	test-auc:0.90491	train-auc:0.92242
[I 2022-10-17 04:03:30,354] Trial 10 finished with value: 0.904909 and parameters: {'booster': 'gbtree', 'lambda': 0.24849615104822945, 'alpha': 1.2741973181755788e-08, 'max_depth': 9, 'eta': 2.7361886738355464e-08, 'gamma': 4.404229256753266e-08, 'grow_policy': 'lossguide'}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.93939	train-auc:0.95123
[1]	test-auc:0.93940	train-auc:0.95124
[2]	test-auc:0.93941	train-auc:0.95124
[3]	test-auc:0.93941	train-auc:0.95125
[4]	test-auc:0.93941	train-auc:0.95125
[5]	test-auc:0.93941	train-auc:0.95125
[6]	test-auc:0.93940	train-auc:0.95125
[7]	test-auc:0.93940	train-auc:0.95125
[8]	test-auc:0.93941	train-auc:0.95125
[9]	test-auc:0.93942	train-auc:0.95128
[I 2022-10-17 04:03:31,449] Trial 11 finished with value: 0.939425 and parameters: {'booster': 'gbtree', 'lambda': 0.0023355893179029274, 'alpha': 7.992873460887915e-07, 'max_depth': 9, 'eta': 9.969088857192141e-05, 'gamma': 0.7597453116262533, 'grow_policy': 'depthwise'}. Best is trial 0 with value: 0.939737.
[0]	test-auc:0.93978	train-auc:0.95149
[1]	test-auc:0.93979	train-auc:0.95150
[2]	test-auc:0.93979	train-auc:0.95151
[3]	test-auc:0.93979	train-auc:0.95151
[4]	test-auc:0.93978	train-auc:0.95151
[5]	test-auc:0.93978	train-auc:0.95152
[6]	test-auc:0.93979	train-auc:0.95151
[7]	test-auc:0.93978	train-auc:0.95151
[8]	test-auc:0.93979	train-auc:0.95151
[9]	test-auc:0.93979	train-auc:0.95151
[I 2022-10-17 04:03:32,536] Trial 12 finished with value: 0.939788 and parameters: {'booster': 'gbtree', 'lambda': 0.02139073130615766, 'alpha': 8.889950895611086e-07, 'max_depth': 9, 'eta': 3.5997216261509895e-05, 'gamma': 1.8527570993146753e-06, 'grow_policy': 'depthwise'}. Best is trial 12 with value: 0.939788.
[0]	test-auc:0.92276	train-auc:0.92847
[1]	test-auc:0.92306	train-auc:0.92868
[2]	test-auc:0.92324	train-auc:0.92892
[3]	test-auc:0.92322	train-auc:0.92889
[4]	test-auc:0.92322	train-auc:0.92893
[5]	test-auc:0.92321	train-auc:0.92895
[6]	test-auc:0.92323	train-auc:0.92894
[7]	test-auc:0.92325	train-auc:0.92896
[8]	test-auc:0.92324	train-auc:0.92894
[9]	test-auc:0.92323	train-auc:0.92895
[I 2022-10-17 04:03:33,440] Trial 13 finished with value: 0.923232 and parameters: {'booster': 'gbtree', 'lambda': 0.031816215016409716, 'alpha': 4.2397762517946916e-07, 'max_depth': 7, 'eta': 1.3533974442157944e-06, 'gamma': 5.955422459854445e-07, 'grow_policy': 'depthwise'}. Best is trial 12 with value: 0.939788.
[0]	test-auc:0.93974	train-auc:0.95147
[1]	test-auc:0.93979	train-auc:0.95156
[2]	test-auc:0.93983	train-auc:0.95157
[3]	test-auc:0.93995	train-auc:0.95182
[4]	test-auc:0.94015	train-auc:0.95191
[5]	test-auc:0.94016	train-auc:0.95194
[6]	test-auc:0.94024	train-auc:0.95203
[7]	test-auc:0.94025	train-auc:0.95205
[8]	test-auc:0.94025	train-auc:0.95209
[9]	test-auc:0.94026	train-auc:0.95209
[I 2022-10-17 04:03:35,868] Trial 14 finished with value: 0.940263 and parameters: {'booster': 'dart', 'lambda': 0.04262609990385477, 'alpha': 6.494018249460939e-07, 'max_depth': 9, 'eta': 0.0025869505538236846, 'gamma': 8.57071795569794e-06, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'forest', 'rate_drop': 1.0753813202950642e-08, 'skip_drop': 0.5216752647108062}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.92341	train-auc:0.92908
[1]	test-auc:0.92715	train-auc:0.93195
[2]	test-auc:0.92831	train-auc:0.93347
[3]	test-auc:0.93059	train-auc:0.93513
[4]	test-auc:0.93238	train-auc:0.93668
[5]	test-auc:0.93348	train-auc:0.93773
[6]	test-auc:0.93454	train-auc:0.93875
[7]	test-auc:0.93525	train-auc:0.93957
[8]	test-auc:0.93594	train-auc:0.94028
[9]	test-auc:0.93671	train-auc:0.94098
[I 2022-10-17 04:03:37,398] Trial 15 finished with value: 0.936707 and parameters: {'booster': 'dart', 'lambda': 0.12188581672416318, 'alpha': 3.096265215100841e-06, 'max_depth': 7, 'eta': 0.07198810635451768, 'gamma': 1.4679188024141072e-06, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'forest', 'rate_drop': 2.5293496980404144e-08, 'skip_drop': 0.2472041924685039}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.79419	train-auc:0.80047
[1]	test-auc:0.79419	train-auc:0.80047
[2]	test-auc:0.79444	train-auc:0.80007
[3]	test-auc:0.79496	train-auc:0.80116
[4]	test-auc:0.79439	train-auc:0.79999
[5]	test-auc:0.79506	train-auc:0.80127
[I 2022-10-17 04:03:38,084] Trial 16 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93277	train-auc:0.94151
[1]	test-auc:0.93281	train-auc:0.94159
[2]	test-auc:0.93280	train-auc:0.94160
[3]	test-auc:0.93282	train-auc:0.94162
[4]	test-auc:0.93281	train-auc:0.94162
[5]	test-auc:0.93290	train-auc:0.94165
[6]	test-auc:0.93323	train-auc:0.94186
[7]	test-auc:0.93333	train-auc:0.94196
[8]	test-auc:0.93333	train-auc:0.94196
[9]	test-auc:0.93342	train-auc:0.94203
[I 2022-10-17 04:03:39,944] Trial 17 finished with value: 0.933425 and parameters: {'booster': 'dart', 'lambda': 0.0002570744570546526, 'alpha': 1.7076594656625825e-05, 'max_depth': 8, 'eta': 0.0016405998211394405, 'gamma': 2.4784977814526482e-08, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'forest', 'rate_drop': 4.408812455689094e-05, 'skip_drop': 0.00016659879954985978}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.87816	train-auc:0.87915
[1]	test-auc:0.88893	train-auc:0.88917
[2]	test-auc:0.89713	train-auc:0.89726
[3]	test-auc:0.90338	train-auc:0.90429
[4]	test-auc:0.90782	train-auc:0.90865
[I 2022-10-17 04:03:40,688] Trial 18 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.92243	train-auc:0.92820
[1]	test-auc:0.92285	train-auc:0.92870
[2]	test-auc:0.92310	train-auc:0.92881
[3]	test-auc:0.92315	train-auc:0.92885
[4]	test-auc:0.92316	train-auc:0.92885
[5]	test-auc:0.92321	train-auc:0.92892
[6]	test-auc:0.92319	train-auc:0.92887
[7]	test-auc:0.92322	train-auc:0.92889
[8]	test-auc:0.92322	train-auc:0.92888
[9]	test-auc:0.92320	train-auc:0.92888
[I 2022-10-17 04:03:42,144] Trial 19 finished with value: 0.923199 and parameters: {'booster': 'dart', 'lambda': 0.012056772316570359, 'alpha': 2.5803243591880236e-06, 'max_depth': 7, 'eta': 1.1524262807170689e-06, 'gamma': 1.4042835088728297e-05, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'forest', 'rate_drop': 0.5961843601302141, 'skip_drop': 1.5989146868495662e-07}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.72589	train-auc:0.73348
[1]	test-auc:0.72589	train-auc:0.73348
[2]	test-auc:0.72589	train-auc:0.73348
[3]	test-auc:0.72589	train-auc:0.73348
[4]	test-auc:0.72589	train-auc:0.73348
[I 2022-10-17 04:03:42,811] Trial 20 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93969	train-auc:0.95147
[1]	test-auc:0.93975	train-auc:0.95147
[2]	test-auc:0.93971	train-auc:0.95147
[3]	test-auc:0.93975	train-auc:0.95148
[4]	test-auc:0.93975	train-auc:0.95147
[5]	test-auc:0.93975	train-auc:0.95148
[6]	test-auc:0.93977	train-auc:0.95149
[7]	test-auc:0.93974	train-auc:0.95147
[8]	test-auc:0.93976	train-auc:0.95147
[9]	test-auc:0.93975	train-auc:0.95147
[I 2022-10-17 04:03:43,886] Trial 21 finished with value: 0.939749 and parameters: {'booster': 'gbtree', 'lambda': 0.0525327770396335, 'alpha': 5.364533950535446e-08, 'max_depth': 9, 'eta': 7.002927403873435e-06, 'gamma': 0.00020415208970674323, 'grow_policy': 'depthwise'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93272	train-auc:0.94132
[1]	test-auc:0.93285	train-auc:0.94143
[2]	test-auc:0.93287	train-auc:0.94146
[3]	test-auc:0.93287	train-auc:0.94146
[4]	test-auc:0.93289	train-auc:0.94146
[5]	test-auc:0.93291	train-auc:0.94147
[6]	test-auc:0.93290	train-auc:0.94147
[7]	test-auc:0.93288	train-auc:0.94147
[8]	test-auc:0.93287	train-auc:0.94146
[9]	test-auc:0.93289	train-auc:0.94146
[I 2022-10-17 04:03:44,857] Trial 22 finished with value: 0.932888 and parameters: {'booster': 'gbtree', 'lambda': 0.08485714178035111, 'alpha': 7.639900722341502e-08, 'max_depth': 8, 'eta': 3.3874203129121655e-06, 'gamma': 0.001050493694660505, 'grow_policy': 'depthwise'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.90347	train-auc:0.91228
[1]	test-auc:0.91312	train-auc:0.92158
[2]	test-auc:0.91792	train-auc:0.92728
[3]	test-auc:0.92703	train-auc:0.93502
[4]	test-auc:0.92629	train-auc:0.93496
[5]	test-auc:0.92801	train-auc:0.93648
[6]	test-auc:0.92886	train-auc:0.93696
[7]	test-auc:0.93004	train-auc:0.93831
[8]	test-auc:0.92948	train-auc:0.93779
[9]	test-auc:0.92987	train-auc:0.93835
[I 2022-10-17 04:03:45,795] Trial 23 finished with value: 0.929874 and parameters: {'booster': 'gbtree', 'lambda': 0.006531904179046886, 'alpha': 1.958807185746545e-07, 'max_depth': 8, 'eta': 7.636422207794056e-08, 'gamma': 9.589039175514276e-06, 'grow_policy': 'depthwise'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93967	train-auc:0.95144
[1]	test-auc:0.93970	train-auc:0.95146
[2]	test-auc:0.93974	train-auc:0.95147
[3]	test-auc:0.93976	train-auc:0.95146
[4]	test-auc:0.93973	train-auc:0.95147
[5]	test-auc:0.93977	train-auc:0.95147
[6]	test-auc:0.93975	train-auc:0.95147
[7]	test-auc:0.93975	train-auc:0.95147
[8]	test-auc:0.93975	train-auc:0.95146
[9]	test-auc:0.93975	train-auc:0.95147
[I 2022-10-17 04:03:46,878] Trial 24 finished with value: 0.939751 and parameters: {'booster': 'gbtree', 'lambda': 0.07336761729504898, 'alpha': 1.8771518911509615e-06, 'max_depth': 9, 'eta': 7.62110697051007e-06, 'gamma': 1.808779742225288e-07, 'grow_policy': 'depthwise'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93277	train-auc:0.94151
[1]	test-auc:0.93279	train-auc:0.94151
[2]	test-auc:0.93280	train-auc:0.94158
[3]	test-auc:0.93281	train-auc:0.94159
[4]	test-auc:0.93280	train-auc:0.94159
[5]	test-auc:0.93280	train-auc:0.94159
[6]	test-auc:0.93281	train-auc:0.94160
[7]	test-auc:0.93281	train-auc:0.94161
[8]	test-auc:0.93280	train-auc:0.94161
[9]	test-auc:0.93281	train-auc:0.94161
[I 2022-10-17 04:03:47,874] Trial 25 finished with value: 0.932806 and parameters: {'booster': 'gbtree', 'lambda': 0.0005927227944191411, 'alpha': 2.667397466644078e-06, 'max_depth': 8, 'eta': 0.00040308820670685076, 'gamma': 1.7006573537925114e-07, 'grow_policy': 'depthwise'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.88245	train-auc:0.88535
[1]	test-auc:0.90477	train-auc:0.90777
[2]	test-auc:0.90885	train-auc:0.91168
[3]	test-auc:0.91017	train-auc:0.91271
[4]	test-auc:0.91083	train-auc:0.91373
[5]	test-auc:0.91082	train-auc:0.91353
[I 2022-10-17 04:03:48,430] Trial 26 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93984	train-auc:0.95139
[1]	test-auc:0.93982	train-auc:0.95139
[2]	test-auc:0.93983	train-auc:0.95140
[3]	test-auc:0.93985	train-auc:0.95139
[4]	test-auc:0.93983	train-auc:0.95140
[5]	test-auc:0.93983	train-auc:0.95140
[6]	test-auc:0.93983	train-auc:0.95141
[7]	test-auc:0.93983	train-auc:0.95140
[8]	test-auc:0.93983	train-auc:0.95140
[9]	test-auc:0.93984	train-auc:0.95140
[I 2022-10-17 04:03:50,787] Trial 27 finished with value: 0.939843 and parameters: {'booster': 'dart', 'lambda': 0.16489479683051558, 'alpha': 1.683061022247888e-05, 'max_depth': 9, 'eta': 2.155168889156368e-05, 'gamma': 3.819677035950188e-06, 'grow_policy': 'lossguide', 'sample_type': 'weighted', 'normalize_type': 'tree', 'rate_drop': 0.007944945548142323, 'skip_drop': 6.737402440985577e-06}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.91286	train-auc:0.91546
[1]	test-auc:0.91313	train-auc:0.91565
[2]	test-auc:0.91397	train-auc:0.91642
[3]	test-auc:0.91401	train-auc:0.91670
[4]	test-auc:0.91519	train-auc:0.91793
[I 2022-10-17 04:03:51,641] Trial 28 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93289	train-auc:0.94146
[1]	test-auc:0.93290	train-auc:0.94147
[2]	test-auc:0.93290	train-auc:0.94147
[3]	test-auc:0.93292	train-auc:0.94153
[4]	test-auc:0.93291	train-auc:0.94154
[5]	test-auc:0.93290	train-auc:0.94154
[6]	test-auc:0.93290	train-auc:0.94155
[7]	test-auc:0.93290	train-auc:0.94155
[8]	test-auc:0.93290	train-auc:0.94155
[9]	test-auc:0.93291	train-auc:0.94156
[I 2022-10-17 04:03:53,504] Trial 29 finished with value: 0.932909 and parameters: {'booster': 'dart', 'lambda': 0.19600871976590678, 'alpha': 8.75042705965533e-06, 'max_depth': 8, 'eta': 0.00040830042860170394, 'gamma': 4.433512138529205e-05, 'grow_policy': 'lossguide', 'sample_type': 'weighted', 'normalize_type': 'tree', 'rate_drop': 0.0024986709745782156, 'skip_drop': 3.993877516462538e-06}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.92324	train-auc:0.92894
[1]	test-auc:0.92325	train-auc:0.92896
[2]	test-auc:0.92327	train-auc:0.92895
[3]	test-auc:0.92325	train-auc:0.92895
[4]	test-auc:0.92326	train-auc:0.92895
[I 2022-10-17 04:03:54,462] Trial 30 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93969	train-auc:0.95136
[1]	test-auc:0.93974	train-auc:0.95141
[2]	test-auc:0.93974	train-auc:0.95140
[3]	test-auc:0.93976	train-auc:0.95141
[4]	test-auc:0.93972	train-auc:0.95140
[5]	test-auc:0.93974	train-auc:0.95141
[6]	test-auc:0.93975	train-auc:0.95141
[7]	test-auc:0.93974	train-auc:0.95141
[8]	test-auc:0.93974	train-auc:0.95141
[9]	test-auc:0.93974	train-auc:0.95140
[I 2022-10-17 04:03:56,834] Trial 31 finished with value: 0.939739 and parameters: {'booster': 'dart', 'lambda': 0.12934099901533413, 'alpha': 6.576125696798712e-06, 'max_depth': 9, 'eta': 8.384034707188935e-06, 'gamma': 2.0355954134549143e-07, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'forest', 'rate_drop': 1.435537255379489e-06, 'skip_drop': 7.68188544825455e-06}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93931	train-auc:0.95118
[1]	test-auc:0.93944	train-auc:0.95137
[2]	test-auc:0.93967	train-auc:0.95145
[3]	test-auc:0.93966	train-auc:0.95144
[4]	test-auc:0.93983	train-auc:0.95150
[5]	test-auc:0.93978	train-auc:0.95149
[6]	test-auc:0.93973	train-auc:0.95148
[7]	test-auc:0.93980	train-auc:0.95149
[8]	test-auc:0.93979	train-auc:0.95151
[9]	test-auc:0.93980	train-auc:0.95149
[I 2022-10-17 04:03:58,909] Trial 32 finished with value: 0.939801 and parameters: {'booster': 'gbtree', 'lambda': 0.04520528710055034, 'alpha': 7.177872151809088e-05, 'max_depth': 9, 'eta': 1.4142140116172675e-06, 'gamma': 5.529404060137497e-07, 'grow_policy': 'lossguide'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93125	train-auc:0.94003
[1]	test-auc:0.93228	train-auc:0.94097
[2]	test-auc:0.93247	train-auc:0.94120
[3]	test-auc:0.93257	train-auc:0.94134
[4]	test-auc:0.93250	train-auc:0.94135
[I 2022-10-17 04:04:00,177] Trial 33 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93976	train-auc:0.95151
[1]	test-auc:0.93976	train-auc:0.95152
[2]	test-auc:0.93976	train-auc:0.95152
[3]	test-auc:0.93977	train-auc:0.95152
[4]	test-auc:0.93976	train-auc:0.95152
[5]	test-auc:0.93977	train-auc:0.95152
[6]	test-auc:0.93976	train-auc:0.95152
[7]	test-auc:0.93975	train-auc:0.95153
[8]	test-auc:0.93976	train-auc:0.95152
[9]	test-auc:0.93975	train-auc:0.95153
[I 2022-10-17 04:04:02,335] Trial 34 finished with value: 0.939754 and parameters: {'booster': 'gbtree', 'lambda': 0.004773235170682679, 'alpha': 0.00012736814837946173, 'max_depth': 9, 'eta': 3.202172128305507e-05, 'gamma': 9.432226692954273e-06, 'grow_policy': 'lossguide'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.92559	train-auc:0.93448
[1]	test-auc:0.92975	train-auc:0.93879
[2]	test-auc:0.93071	train-auc:0.93982
[3]	test-auc:0.93174	train-auc:0.94053
[4]	test-auc:0.93153	train-auc:0.94041
[I 2022-10-17 04:04:03,323] Trial 35 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93975	train-auc:0.95151
[1]	test-auc:0.93975	train-auc:0.95153
[2]	test-auc:0.93976	train-auc:0.95153
[3]	test-auc:0.93977	train-auc:0.95153
[4]	test-auc:0.93975	train-auc:0.95153
[5]	test-auc:0.93975	train-auc:0.95153
[6]	test-auc:0.93975	train-auc:0.95153
[7]	test-auc:0.93975	train-auc:0.95153
[8]	test-auc:0.93977	train-auc:0.95153
[9]	test-auc:0.93978	train-auc:0.95156
[I 2022-10-17 04:04:05,703] Trial 36 finished with value: 0.939783 and parameters: {'booster': 'dart', 'lambda': 0.0008249585801225136, 'alpha': 1.511902352009192e-07, 'max_depth': 9, 'eta': 0.00010040628728912814, 'gamma': 4.533926383482108e-05, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 0.0002443449274821486, 'skip_drop': 3.468722039193922e-05}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.79251	train-auc:0.79475
[1]	test-auc:0.79473	train-auc:0.79695
[2]	test-auc:0.79599	train-auc:0.79817
[3]	test-auc:0.79635	train-auc:0.79856
[4]	test-auc:0.79649	train-auc:0.79868
[5]	test-auc:0.79647	train-auc:0.79865
[I 2022-10-17 04:04:06,163] Trial 37 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93969	train-auc:0.95043
[1]	test-auc:0.93972	train-auc:0.95047
[2]	test-auc:0.93975	train-auc:0.95047
[3]	test-auc:0.93976	train-auc:0.95047
[4]	test-auc:0.93976	train-auc:0.95048
[5]	test-auc:0.93976	train-auc:0.95048
[6]	test-auc:0.93977	train-auc:0.95048
[7]	test-auc:0.93976	train-auc:0.95048
[8]	test-auc:0.93977	train-auc:0.95048
[9]	test-auc:0.93977	train-auc:0.95048
[I 2022-10-17 04:04:08,077] Trial 38 finished with value: 0.939773 and parameters: {'booster': 'gbtree', 'lambda': 0.00018036778201170755, 'alpha': 0.6397104963085858, 'max_depth': 9, 'eta': 0.00018996151926583518, 'gamma': 3.698241116862173e-06, 'grow_policy': 'lossguide'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.79569	train-auc:0.79765
[1]	test-auc:0.80775	train-auc:0.80896
[2]	test-auc:0.82375	train-auc:0.82271
[3]	test-auc:0.83058	train-auc:0.82886
[4]	test-auc:0.83490	train-auc:0.83305
[5]	test-auc:0.83750	train-auc:0.83577
[I 2022-10-17 04:04:08,559] Trial 39 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.89814	train-auc:0.89880
[1]	test-auc:0.89817	train-auc:0.89907
[2]	test-auc:0.89870	train-auc:0.89940
[3]	test-auc:0.89942	train-auc:0.90007
[4]	test-auc:0.90107	train-auc:0.90163
[5]	test-auc:0.90144	train-auc:0.90220
[I 2022-10-17 04:04:09,347] Trial 40 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93974	train-auc:0.95150
[1]	test-auc:0.93974	train-auc:0.95152
[2]	test-auc:0.93976	train-auc:0.95153
[3]	test-auc:0.93976	train-auc:0.95153
[4]	test-auc:0.93975	train-auc:0.95153
[5]	test-auc:0.93976	train-auc:0.95153
[6]	test-auc:0.93975	train-auc:0.95153
[7]	test-auc:0.93974	train-auc:0.95153
[8]	test-auc:0.93975	train-auc:0.95153
[9]	test-auc:0.93975	train-auc:0.95153
[I 2022-10-17 04:04:11,730] Trial 41 finished with value: 0.939753 and parameters: {'booster': 'dart', 'lambda': 0.0011954021432775787, 'alpha': 1.4719039228443147e-07, 'max_depth': 9, 'eta': 5.112471441616327e-05, 'gamma': 4.5071954720659635e-05, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 0.00021640144049665553, 'skip_drop': 5.3727021258412615e-05}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93938	train-auc:0.95127
[1]	test-auc:0.93971	train-auc:0.95148
[2]	test-auc:0.93974	train-auc:0.95149
[3]	test-auc:0.93974	train-auc:0.95149
[4]	test-auc:0.93976	train-auc:0.95151
[5]	test-auc:0.93975	train-auc:0.95150
[6]	test-auc:0.93973	train-auc:0.95150
[7]	test-auc:0.93977	train-auc:0.95151
[8]	test-auc:0.93978	train-auc:0.95151
[9]	test-auc:0.93978	train-auc:0.95151
[I 2022-10-17 04:04:14,136] Trial 42 finished with value: 0.939784 and parameters: {'booster': 'dart', 'lambda': 0.015033711647954988, 'alpha': 5.428834492999296e-07, 'max_depth': 9, 'eta': 2.5549324901511333e-06, 'gamma': 5.5743765107436864e-05, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 0.00039899189604444536, 'skip_drop': 5.900512877268496e-05}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93244	train-auc:0.94127
[1]	test-auc:0.93289	train-auc:0.94150
[2]	test-auc:0.93288	train-auc:0.94156
[3]	test-auc:0.93282	train-auc:0.94154
[4]	test-auc:0.93280	train-auc:0.94154
[5]	test-auc:0.93281	train-auc:0.94154
[I 2022-10-17 04:04:15,280] Trial 43 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93341	train-auc:0.94568
[1]	test-auc:0.93716	train-auc:0.94896
[2]	test-auc:0.93774	train-auc:0.94971
[3]	test-auc:0.93832	train-auc:0.95017
[4]	test-auc:0.93837	train-auc:0.95024
[5]	test-auc:0.93835	train-auc:0.95036
[6]	test-auc:0.93866	train-auc:0.95052
[7]	test-auc:0.93862	train-auc:0.95053
[8]	test-auc:0.93847	train-auc:0.95047
[9]	test-auc:0.93860	train-auc:0.95051
[I 2022-10-17 04:04:17,584] Trial 44 finished with value: 0.938604 and parameters: {'booster': 'dart', 'lambda': 0.011140358977726379, 'alpha': 3.792427401067707e-07, 'max_depth': 9, 'eta': 3.344481166401654e-07, 'gamma': 0.0004558772731365434, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 1.0217263694103925e-05, 'skip_drop': 7.120186832018198e-07}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.79577	train-auc:0.79777
[1]	test-auc:0.80674	train-auc:0.80808
[2]	test-auc:0.81872	train-auc:0.81867
[3]	test-auc:0.82415	train-auc:0.82358
[4]	test-auc:0.82714	train-auc:0.82646
[5]	test-auc:0.82871	train-auc:0.82810
[I 2022-10-17 04:04:18,048] Trial 45 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93972	train-auc:0.95137
[1]	test-auc:0.93976	train-auc:0.95141
[2]	test-auc:0.93975	train-auc:0.95140
[3]	test-auc:0.93975	train-auc:0.95141
[4]	test-auc:0.93973	train-auc:0.95141
[5]	test-auc:0.93974	train-auc:0.95141
[6]	test-auc:0.93974	train-auc:0.95141
[7]	test-auc:0.93974	train-auc:0.95141
[8]	test-auc:0.93975	train-auc:0.95140
[9]	test-auc:0.93975	train-auc:0.95141
[I 2022-10-17 04:04:20,110] Trial 46 finished with value: 0.939746 and parameters: {'booster': 'gbtree', 'lambda': 0.13309627689758932, 'alpha': 0.0028945006214318803, 'max_depth': 9, 'eta': 1.9356034099138127e-05, 'gamma': 0.011172766522409294, 'grow_policy': 'lossguide'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.92318	train-auc:0.92884
[1]	test-auc:0.92330	train-auc:0.92890
[2]	test-auc:0.92328	train-auc:0.92890
[3]	test-auc:0.92329	train-auc:0.92891
[4]	test-auc:0.92333	train-auc:0.92892
[5]	test-auc:0.92330	train-auc:0.92894
[I 2022-10-17 04:04:20,866] Trial 47 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93290	train-auc:0.94142
[1]	test-auc:0.93291	train-auc:0.94143
[2]	test-auc:0.93291	train-auc:0.94143
[3]	test-auc:0.93291	train-auc:0.94143
[4]	test-auc:0.93292	train-auc:0.94143
[I 2022-10-17 04:04:21,993] Trial 48 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.83375	train-auc:0.85010
[1]	test-auc:0.89129	train-auc:0.90501
[2]	test-auc:0.89180	train-auc:0.90510
[3]	test-auc:0.92489	train-auc:0.93619
[4]	test-auc:0.90432	train-auc:0.91770
[I 2022-10-17 04:04:23,207] Trial 49 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.84525	train-auc:0.84458
[1]	test-auc:0.84525	train-auc:0.84459
[2]	test-auc:0.84525	train-auc:0.84459
[3]	test-auc:0.84525	train-auc:0.84459
[4]	test-auc:0.84525	train-auc:0.84459
[5]	test-auc:0.84539	train-auc:0.84471
[I 2022-10-17 04:04:23,914] Trial 50 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93974	train-auc:0.95151
[1]	test-auc:0.93974	train-auc:0.95153
[2]	test-auc:0.93976	train-auc:0.95153
[3]	test-auc:0.93976	train-auc:0.95153
[4]	test-auc:0.93975	train-auc:0.95153
[5]	test-auc:0.93975	train-auc:0.95153
[6]	test-auc:0.93977	train-auc:0.95153
[7]	test-auc:0.93978	train-auc:0.95156
[8]	test-auc:0.93979	train-auc:0.95157
[9]	test-auc:0.93980	train-auc:0.95159
[I 2022-10-17 04:04:26,315] Trial 51 finished with value: 0.939804 and parameters: {'booster': 'dart', 'lambda': 0.0008827500652264777, 'alpha': 5.752582526235288e-07, 'max_depth': 9, 'eta': 0.00013680965249878729, 'gamma': 0.00010965997412169428, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 0.00021769317317595703, 'skip_drop': 2.9784833298843077e-05}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.50000	train-auc:0.50000
[1]	test-auc:0.50000	train-auc:0.50000
[2]	test-auc:0.50000	train-auc:0.50000
[3]	test-auc:0.50000	train-auc:0.50000
[4]	test-auc:0.50000	train-auc:0.50000
[I 2022-10-17 04:04:27,650] Trial 52 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93278	train-auc:0.94151
[1]	test-auc:0.93278	train-auc:0.94152
[2]	test-auc:0.93280	train-auc:0.94158
[3]	test-auc:0.93281	train-auc:0.94159
[4]	test-auc:0.93280	train-auc:0.94159
[I 2022-10-17 04:04:28,763] Trial 53 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93973	train-auc:0.95151
[1]	test-auc:0.93976	train-auc:0.95153
[2]	test-auc:0.93975	train-auc:0.95153
[3]	test-auc:0.93976	train-auc:0.95153
[4]	test-auc:0.93976	train-auc:0.95153
[5]	test-auc:0.93975	train-auc:0.95153
[6]	test-auc:0.93975	train-auc:0.95154
[7]	test-auc:0.93975	train-auc:0.95153
[8]	test-auc:0.93976	train-auc:0.95154
[9]	test-auc:0.93975	train-auc:0.95153
[I 2022-10-17 04:04:31,144] Trial 54 finished with value: 0.939754 and parameters: {'booster': 'dart', 'lambda': 0.0002838842943818288, 'alpha': 2.501379747039291e-07, 'max_depth': 9, 'eta': 1.2543542748713658e-05, 'gamma': 7.18862632828114e-07, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 1.757404525177173e-07, 'skip_drop': 5.7341514297902455e-05}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93956	train-auc:0.95138
[1]	test-auc:0.93972	train-auc:0.95147
[2]	test-auc:0.93975	train-auc:0.95148
[3]	test-auc:0.93979	train-auc:0.95153
[4]	test-auc:0.93978	train-auc:0.95151
[5]	test-auc:0.93977	train-auc:0.95150
[6]	test-auc:0.93976	train-auc:0.95149
[7]	test-auc:0.93978	train-auc:0.95149
[8]	test-auc:0.93975	train-auc:0.95149
[9]	test-auc:0.93975	train-auc:0.95150
[I 2022-10-17 04:04:33,522] Trial 55 finished with value: 0.939752 and parameters: {'booster': 'dart', 'lambda': 0.06418830370076142, 'alpha': 1.1556755885657088e-05, 'max_depth': 9, 'eta': 3.0193605846931012e-06, 'gamma': 1.87293279443324e-05, 'grow_policy': 'lossguide', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': 8.191541228198422e-05, 'skip_drop': 1.3275531426502823e-06}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.92336	train-auc:0.92904
[1]	test-auc:0.92485	train-auc:0.93021
[2]	test-auc:0.92730	train-auc:0.93215
[3]	test-auc:0.92804	train-auc:0.93292
[4]	test-auc:0.92953	train-auc:0.93443
[I 2022-10-17 04:04:34,328] Trial 56 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.79570	train-auc:0.79771
[1]	test-auc:0.80721	train-auc:0.80850
[2]	test-auc:0.82096	train-auc:0.82051
[3]	test-auc:0.82690	train-auc:0.82588
[4]	test-auc:0.83027	train-auc:0.82916
[5]	test-auc:0.83211	train-auc:0.83108
[I 2022-10-17 04:04:34,800] Trial 57 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93224	train-auc:0.94053
[1]	test-auc:0.93286	train-auc:0.94118
[2]	test-auc:0.93291	train-auc:0.94137
[3]	test-auc:0.93310	train-auc:0.94140
[4]	test-auc:0.93311	train-auc:0.94141
[5]	test-auc:0.93312	train-auc:0.94148
[I 2022-10-17 04:04:35,971] Trial 58 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93975	train-auc:0.95151
[1]	test-auc:0.93982	train-auc:0.95159
[2]	test-auc:0.93985	train-auc:0.95160
[3]	test-auc:0.93986	train-auc:0.95161
[4]	test-auc:0.93985	train-auc:0.95162
[5]	test-auc:0.93987	train-auc:0.95162
[6]	test-auc:0.93993	train-auc:0.95179
[7]	test-auc:0.94006	train-auc:0.95189
[8]	test-auc:0.94006	train-auc:0.95189
[9]	test-auc:0.94018	train-auc:0.95195
[I 2022-10-17 04:04:37,068] Trial 59 finished with value: 0.940182 and parameters: {'booster': 'gbtree', 'lambda': 2.650237911635235e-05, 'alpha': 3.567297726980018e-05, 'max_depth': 9, 'eta': 0.0013271442787473732, 'gamma': 7.278154817995061e-06, 'grow_policy': 'depthwise'}. Best is trial 14 with value: 0.940263.
[0]	test-auc:0.93277	train-auc:0.94151
[1]	test-auc:0.93280	train-auc:0.94159
[2]	test-auc:0.93280	train-auc:0.94160
[3]	test-auc:0.93281	train-auc:0.94161
[4]	test-auc:0.93280	train-auc:0.94162
[I 2022-10-17 04:04:37,687] Trial 60 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93975	train-auc:0.95151
[1]	test-auc:0.93984	train-auc:0.95161
[2]	test-auc:0.94004	train-auc:0.95191
[3]	test-auc:0.94018	train-auc:0.95199
[4]	test-auc:0.94022	train-auc:0.95209
[5]	test-auc:0.94024	train-auc:0.95217
[6]	test-auc:0.94028	train-auc:0.95221
[7]	test-auc:0.94031	train-auc:0.95223
[8]	test-auc:0.94040	train-auc:0.95229
[9]	test-auc:0.94043	train-auc:0.95234
[I 2022-10-17 04:04:38,765] Trial 61 finished with value: 0.94043 and parameters: {'booster': 'gbtree', 'lambda': 4.2566130793524976e-06, 'alpha': 3.0959184761572737e-06, 'max_depth': 9, 'eta': 0.004361023168808038, 'gamma': 7.167972095047094e-06, 'grow_policy': 'depthwise'}. Best is trial 61 with value: 0.94043.
[0]	test-auc:0.93976	train-auc:0.95152
[1]	test-auc:0.93985	train-auc:0.95162
[2]	test-auc:0.94022	train-auc:0.95197
[3]	test-auc:0.94025	train-auc:0.95208
[4]	test-auc:0.94026	train-auc:0.95214
[5]	test-auc:0.94024	train-auc:0.95218
[6]	test-auc:0.94030	train-auc:0.95225
[7]	test-auc:0.94049	train-auc:0.95240
[8]	test-auc:0.94059	train-auc:0.95249
[9]	test-auc:0.94072	train-auc:0.95256
[I 2022-10-17 04:04:39,850] Trial 62 finished with value: 0.940721 and parameters: {'booster': 'gbtree', 'lambda': 2.9582340402824e-06, 'alpha': 1.3541151277620548e-06, 'max_depth': 9, 'eta': 0.005538361153641223, 'gamma': 6.334800297435841e-06, 'grow_policy': 'depthwise'}. Best is trial 62 with value: 0.940721.
[0]	test-auc:0.93976	train-auc:0.95151
[1]	test-auc:0.93985	train-auc:0.95162
[2]	test-auc:0.94022	train-auc:0.95197
[3]	test-auc:0.94023	train-auc:0.95205
[4]	test-auc:0.94026	train-auc:0.95214
[5]	test-auc:0.94024	train-auc:0.95217
[6]	test-auc:0.94036	train-auc:0.95228
[7]	test-auc:0.94041	train-auc:0.95232
[8]	test-auc:0.94049	train-auc:0.95237
[9]	test-auc:0.94059	train-auc:0.95249
[I 2022-10-17 04:04:40,949] Trial 63 finished with value: 0.940592 and parameters: {'booster': 'gbtree', 'lambda': 2.767197423888885e-06, 'alpha': 3.958936944687295e-06, 'max_depth': 9, 'eta': 0.005288444705427519, 'gamma': 6.867404893373784e-06, 'grow_policy': 'depthwise'}. Best is trial 62 with value: 0.940721.
[0]	test-auc:0.93976	train-auc:0.95151
[1]	test-auc:0.93984	train-auc:0.95162
[2]	test-auc:0.94019	train-auc:0.95197
[3]	test-auc:0.94019	train-auc:0.95200
[4]	test-auc:0.94025	train-auc:0.95212
[5]	test-auc:0.94025	train-auc:0.95219
[6]	test-auc:0.94029	train-auc:0.95222
[7]	test-auc:0.94036	train-auc:0.95227
[8]	test-auc:0.94042	train-auc:0.95231
[9]	test-auc:0.94050	train-auc:0.95239
[I 2022-10-17 04:04:42,048] Trial 64 finished with value: 0.940504 and parameters: {'booster': 'gbtree', 'lambda': 3.4409257053864875e-06, 'alpha': 3.6137031211525634e-06, 'max_depth': 9, 'eta': 0.004635421285055409, 'gamma': 9.761393771761886e-06, 'grow_policy': 'depthwise'}. Best is trial 62 with value: 0.940721.
[0]	test-auc:0.93277	train-auc:0.94151
[1]	test-auc:0.93279	train-auc:0.94160
[2]	test-auc:0.93282	train-auc:0.94163
[3]	test-auc:0.93334	train-auc:0.94196
[4]	test-auc:0.93345	train-auc:0.94208
[I 2022-10-17 04:04:42,681] Trial 65 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93977	train-auc:0.95152
[1]	test-auc:0.93998	train-auc:0.95193
[2]	test-auc:0.94018	train-auc:0.95209
[3]	test-auc:0.94016	train-auc:0.95215
[4]	test-auc:0.94042	train-auc:0.95238
[5]	test-auc:0.94059	train-auc:0.95247
[6]	test-auc:0.94080	train-auc:0.95260
[7]	test-auc:0.94124	train-auc:0.95311
[8]	test-auc:0.94172	train-auc:0.95360
[9]	test-auc:0.94223	train-auc:0.95416
[I 2022-10-17 04:04:43,808] Trial 66 finished with value: 0.942226 and parameters: {'booster': 'gbtree', 'lambda': 6.221021723531662e-06, 'alpha': 7.573921226795138e-06, 'max_depth': 9, 'eta': 0.008377032722049446, 'gamma': 1.9440021252761917e-05, 'grow_policy': 'depthwise'}. Best is trial 66 with value: 0.942226.
[0]	test-auc:0.72589	train-auc:0.73348
[1]	test-auc:0.72589	train-auc:0.73348
[2]	test-auc:0.72589	train-auc:0.73348
[3]	test-auc:0.72589	train-auc:0.73348
[4]	test-auc:0.72605	train-auc:0.73356
[5]	test-auc:0.72605	train-auc:0.73356
[I 2022-10-17 04:04:44,325] Trial 67 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.87777	train-auc:0.87879
[1]	test-auc:0.87954	train-auc:0.88031
[2]	test-auc:0.88165	train-auc:0.88248
[3]	test-auc:0.88262	train-auc:0.88321
[4]	test-auc:0.88337	train-auc:0.88381
[I 2022-10-17 04:04:44,886] Trial 68 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.93993	train-auc:0.95170
[1]	test-auc:0.94419	train-auc:0.95522
[2]	test-auc:0.94594	train-auc:0.95727
[3]	test-auc:0.94849	train-auc:0.95944
[4]	test-auc:0.94990	train-auc:0.96060
[5]	test-auc:0.95129	train-auc:0.96176
[6]	test-auc:0.95199	train-auc:0.96257
[7]	test-auc:0.95439	train-auc:0.96461
[8]	test-auc:0.95578	train-auc:0.96590
[9]	test-auc:0.95680	train-auc:0.96709
[I 2022-10-17 04:04:46,034] Trial 69 finished with value: 0.956796 and parameters: {'booster': 'gbtree', 'lambda': 2.736864321604842e-05, 'alpha': 5.8143168424613715e-06, 'max_depth': 9, 'eta': 0.11374301887322387, 'gamma': 2.7920896541644113e-05, 'grow_policy': 'depthwise'}. Best is trial 69 with value: 0.956796.
[0]	test-auc:0.93339	train-auc:0.94187
[1]	test-auc:0.93945	train-auc:0.94773
[2]	test-auc:0.94295	train-auc:0.95089
[3]	test-auc:0.94640	train-auc:0.95429
[4]	test-auc:0.94958	train-auc:0.95750
[5]	test-auc:0.95195	train-auc:0.96027
[6]	test-auc:0.95351	train-auc:0.96171
[7]	test-auc:0.95500	train-auc:0.96347
[8]	test-auc:0.95633	train-auc:0.96488
[9]	test-auc:0.95709	train-auc:0.96625
[I 2022-10-17 04:04:47,022] Trial 70 finished with value: 0.957088 and parameters: {'booster': 'gbtree', 'lambda': 7.027310945604398e-06, 'alpha': 1.086205234163897e-05, 'max_depth': 8, 'eta': 0.24953894569165339, 'gamma': 2.6672043490196943e-05, 'grow_policy': 'depthwise'}. Best is trial 70 with value: 0.957088.
[0]	test-auc:0.93342	train-auc:0.94200
[1]	test-auc:0.94515	train-auc:0.95315
[2]	test-auc:0.95079	train-auc:0.95930
[3]	test-auc:0.95415	train-auc:0.96274
[4]	test-auc:0.95581	train-auc:0.96527
[5]	test-auc:0.95797	train-auc:0.96795
[6]	test-auc:0.95926	train-auc:0.97040
[7]	test-auc:0.96016	train-auc:0.97198
[8]	test-auc:0.96187	train-auc:0.97438
[9]	test-auc:0.96299	train-auc:0.97597
[I 2022-10-17 04:04:47,994] Trial 71 finished with value: 0.962987 and parameters: {'booster': 'gbtree', 'lambda': 7.1796112503983475e-06, 'alpha': 9.67597576541745e-06, 'max_depth': 8, 'eta': 0.562522669088554, 'gamma': 1.621994071582901e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.93343	train-auc:0.94202
[1]	test-auc:0.94225	train-auc:0.95094
[2]	test-auc:0.94864	train-auc:0.95791
[3]	test-auc:0.95144	train-auc:0.96124
[4]	test-auc:0.95387	train-auc:0.96459
[5]	test-auc:0.95578	train-auc:0.96709
[6]	test-auc:0.95666	train-auc:0.96872
[7]	test-auc:0.95792	train-auc:0.97040
[8]	test-auc:0.95969	train-auc:0.97240
[9]	test-auc:0.96105	train-auc:0.97410
[I 2022-10-17 04:04:48,981] Trial 72 finished with value: 0.961053 and parameters: {'booster': 'gbtree', 'lambda': 7.812499420087512e-06, 'alpha': 6.7323195035956e-06, 'max_depth': 8, 'eta': 0.4850113236878156, 'gamma': 1.8451016629008165e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92372	train-auc:0.92937
[1]	test-auc:0.93387	train-auc:0.93823
[2]	test-auc:0.93982	train-auc:0.94501
[3]	test-auc:0.94344	train-auc:0.94929
[4]	test-auc:0.94684	train-auc:0.95313
[5]	test-auc:0.94867	train-auc:0.95548
[6]	test-auc:0.94966	train-auc:0.95724
[7]	test-auc:0.95137	train-auc:0.95952
[8]	test-auc:0.95216	train-auc:0.96085
[9]	test-auc:0.95311	train-auc:0.96224
[I 2022-10-17 04:04:49,944] Trial 73 finished with value: 0.953111 and parameters: {'booster': 'gbtree', 'lambda': 6.88932712693673e-06, 'alpha': 9.828763215235661e-06, 'max_depth': 7, 'eta': 0.4154975527111718, 'gamma': 3.371482518144579e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92345	train-auc:0.92919
[1]	test-auc:0.93712	train-auc:0.94477
[2]	test-auc:0.94481	train-auc:0.95332
[3]	test-auc:0.94901	train-auc:0.95833
[4]	test-auc:0.95132	train-auc:0.96239
[5]	test-auc:0.95513	train-auc:0.96622
[6]	test-auc:0.95721	train-auc:0.96929
[7]	test-auc:0.95986	train-auc:0.97292
[8]	test-auc:0.96191	train-auc:0.97546
[9]	test-auc:0.96266	train-auc:0.97665
[I 2022-10-17 04:04:50,923] Trial 74 finished with value: 0.962661 and parameters: {'booster': 'gbtree', 'lambda': 8.773819030186254e-06, 'alpha': 9.332648897902855e-06, 'max_depth': 7, 'eta': 0.9801447927216344, 'gamma': 2.7168147417372703e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92374	train-auc:0.92938
[1]	test-auc:0.93775	train-auc:0.94348
[2]	test-auc:0.94461	train-auc:0.95084
[3]	test-auc:0.94820	train-auc:0.95585
[4]	test-auc:0.94990	train-auc:0.95872
[5]	test-auc:0.95288	train-auc:0.96252
[6]	test-auc:0.95470	train-auc:0.96486
[7]	test-auc:0.95666	train-auc:0.96790
[8]	test-auc:0.95847	train-auc:0.97022
[9]	test-auc:0.95988	train-auc:0.97218
[I 2022-10-17 04:04:51,879] Trial 75 finished with value: 0.959879 and parameters: {'booster': 'gbtree', 'lambda': 8.055888394295522e-06, 'alpha': 6.836799663090652e-06, 'max_depth': 7, 'eta': 0.7218110282367511, 'gamma': 3.4113615732678796e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.91353	train-auc:0.91614
[1]	test-auc:0.92840	train-auc:0.93317
[2]	test-auc:0.93555	train-auc:0.94132
[3]	test-auc:0.94075	train-auc:0.94593
[4]	test-auc:0.94630	train-auc:0.95212
[5]	test-auc:0.94865	train-auc:0.95551
[6]	test-auc:0.95123	train-auc:0.95924
[7]	test-auc:0.95375	train-auc:0.96224
[8]	test-auc:0.95493	train-auc:0.96399
[9]	test-auc:0.95750	train-auc:0.96715
[I 2022-10-17 04:04:52,796] Trial 76 finished with value: 0.957502 and parameters: {'booster': 'gbtree', 'lambda': 8.010277247369474e-06, 'alpha': 7.74418285104155e-06, 'max_depth': 6, 'eta': 0.9692145024386951, 'gamma': 3.2524782493101404e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.91356	train-auc:0.91616
[1]	test-auc:0.92856	train-auc:0.93326
[2]	test-auc:0.93571	train-auc:0.94024
[3]	test-auc:0.94385	train-auc:0.94864
[4]	test-auc:0.94705	train-auc:0.95365
[5]	test-auc:0.94951	train-auc:0.95747
[6]	test-auc:0.95260	train-auc:0.96096
[7]	test-auc:0.95386	train-auc:0.96290
[8]	test-auc:0.95529	train-auc:0.96490
[9]	test-auc:0.95698	train-auc:0.96683
[I 2022-10-17 04:04:53,720] Trial 77 finished with value: 0.956975 and parameters: {'booster': 'gbtree', 'lambda': 3.219274564053079e-05, 'alpha': 1.0213666791819722e-05, 'max_depth': 6, 'eta': 0.9372263622163269, 'gamma': 0.0001864522031193081, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.91333	train-auc:0.91580
[1]	test-auc:0.91999	train-auc:0.92275
[2]	test-auc:0.92253	train-auc:0.92547
[3]	test-auc:0.92591	train-auc:0.92890
[4]	test-auc:0.92902	train-auc:0.93248
[5]	test-auc:0.93196	train-auc:0.93496
[I 2022-10-17 04:04:54,324] Trial 78 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.89923	train-auc:0.89981
[1]	test-auc:0.91570	train-auc:0.91774
[2]	test-auc:0.92424	train-auc:0.92706
[3]	test-auc:0.92819	train-auc:0.93159
[4]	test-auc:0.93529	train-auc:0.93811
[5]	test-auc:0.93952	train-auc:0.94273
[I 2022-10-17 04:04:54,913] Trial 79 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.91330	train-auc:0.91576
[1]	test-auc:0.91710	train-auc:0.92022
[2]	test-auc:0.92052	train-auc:0.92336
[3]	test-auc:0.92398	train-auc:0.92647
[4]	test-auc:0.92541	train-auc:0.92847
[5]	test-auc:0.92823	train-auc:0.93135
[I 2022-10-17 04:04:55,516] Trial 80 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.92351	train-auc:0.92924
[1]	test-auc:0.93706	train-auc:0.94469
[2]	test-auc:0.94275	train-auc:0.95109
[3]	test-auc:0.94754	train-auc:0.95674
[4]	test-auc:0.95112	train-auc:0.96140
[5]	test-auc:0.95424	train-auc:0.96529
[6]	test-auc:0.95632	train-auc:0.96827
[7]	test-auc:0.95760	train-auc:0.97055
[8]	test-auc:0.96026	train-auc:0.97358
[9]	test-auc:0.96095	train-auc:0.97484
[I 2022-10-17 04:04:56,454] Trial 81 finished with value: 0.960952 and parameters: {'booster': 'gbtree', 'lambda': 9.554751039192895e-06, 'alpha': 8.261101284549953e-06, 'max_depth': 7, 'eta': 0.9234771515171961, 'gamma': 2.9611099222993125e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92345	train-auc:0.92919
[1]	test-auc:0.93720	train-auc:0.94494
[2]	test-auc:0.94513	train-auc:0.95352
[3]	test-auc:0.94917	train-auc:0.95861
[4]	test-auc:0.95167	train-auc:0.96265
[5]	test-auc:0.95490	train-auc:0.96672
[6]	test-auc:0.95649	train-auc:0.96869
[7]	test-auc:0.95856	train-auc:0.97147
[8]	test-auc:0.96007	train-auc:0.97335
[9]	test-auc:0.96239	train-auc:0.97635
[I 2022-10-17 04:04:57,384] Trial 82 finished with value: 0.962392 and parameters: {'booster': 'gbtree', 'lambda': 9.723789302012496e-06, 'alpha': 5.7222232219253276e-05, 'max_depth': 7, 'eta': 0.9944994491863864, 'gamma': 0.00017763405920659012, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92350	train-auc:0.92924
[1]	test-auc:0.93706	train-auc:0.94469
[2]	test-auc:0.94274	train-auc:0.95110
[3]	test-auc:0.94696	train-auc:0.95648
[4]	test-auc:0.95237	train-auc:0.96268
[5]	test-auc:0.95534	train-auc:0.96657
[6]	test-auc:0.95703	train-auc:0.96898
[7]	test-auc:0.95884	train-auc:0.97132
[8]	test-auc:0.95987	train-auc:0.97296
[9]	test-auc:0.96184	train-auc:0.97584
[I 2022-10-17 04:04:58,309] Trial 83 finished with value: 0.961842 and parameters: {'booster': 'gbtree', 'lambda': 1.1154277633935185e-05, 'alpha': 5.1977548218328895e-05, 'max_depth': 7, 'eta': 0.9245739629819033, 'gamma': 0.0010996457533926994, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92378	train-auc:0.92940
[1]	test-auc:0.93493	train-auc:0.93985
[2]	test-auc:0.94029	train-auc:0.94553
[3]	test-auc:0.94411	train-auc:0.95067
[4]	test-auc:0.94746	train-auc:0.95435
[5]	test-auc:0.94931	train-auc:0.95693
[6]	test-auc:0.95054	train-auc:0.95877
[7]	test-auc:0.95222	train-auc:0.96089
[8]	test-auc:0.95398	train-auc:0.96302
[9]	test-auc:0.95643	train-auc:0.96603
[I 2022-10-17 04:04:59,264] Trial 84 finished with value: 0.956434 and parameters: {'booster': 'gbtree', 'lambda': 1.0320976349993635e-05, 'alpha': 5.369091072695512e-05, 'max_depth': 7, 'eta': 0.4779417548976263, 'gamma': 0.0038383972688043775, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92378	train-auc:0.92940
[1]	test-auc:0.93659	train-auc:0.94158
[2]	test-auc:0.94209	train-auc:0.94761
[3]	test-auc:0.94557	train-auc:0.95198
[4]	test-auc:0.94808	train-auc:0.95480
[5]	test-auc:0.95075	train-auc:0.95772
[6]	test-auc:0.95217	train-auc:0.95997
[7]	test-auc:0.95340	train-auc:0.96172
[8]	test-auc:0.95456	train-auc:0.96316
[9]	test-auc:0.95606	train-auc:0.96525
[I 2022-10-17 04:05:00,226] Trial 85 finished with value: 0.956064 and parameters: {'booster': 'gbtree', 'lambda': 4.707992654750468e-07, 'alpha': 0.00031543338144203426, 'max_depth': 7, 'eta': 0.5077022993678459, 'gamma': 0.001553198285292715, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92367	train-auc:0.92934
[1]	test-auc:0.93251	train-auc:0.93683
[2]	test-auc:0.93651	train-auc:0.94072
[3]	test-auc:0.93978	train-auc:0.94491
[4]	test-auc:0.94252	train-auc:0.94801
[5]	test-auc:0.94501	train-auc:0.95075
[6]	test-auc:0.94646	train-auc:0.95270
[7]	test-auc:0.94824	train-auc:0.95470
[8]	test-auc:0.94880	train-auc:0.95565
[9]	test-auc:0.94985	train-auc:0.95719
[I 2022-10-17 04:05:01,166] Trial 86 finished with value: 0.949845 and parameters: {'booster': 'gbtree', 'lambda': 1.879228654265715e-06, 'alpha': 9.098805770272151e-05, 'max_depth': 7, 'eta': 0.2823238601125813, 'gamma': 7.475975855516763e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.91311	train-auc:0.91565
[1]	test-auc:0.91631	train-auc:0.91928
[2]	test-auc:0.91893	train-auc:0.92185
[3]	test-auc:0.92105	train-auc:0.92386
[4]	test-auc:0.92315	train-auc:0.92605
[5]	test-auc:0.92400	train-auc:0.92720
[I 2022-10-17 04:05:01,781] Trial 87 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.92377	train-auc:0.92940
[1]	test-auc:0.93801	train-auc:0.94307
[2]	test-auc:0.94370	train-auc:0.95000
[3]	test-auc:0.94689	train-auc:0.95436
[4]	test-auc:0.94865	train-auc:0.95756
[5]	test-auc:0.95192	train-auc:0.96128
[6]	test-auc:0.95496	train-auc:0.96485
[7]	test-auc:0.95674	train-auc:0.96723
[8]	test-auc:0.95813	train-auc:0.96918
[9]	test-auc:0.95979	train-auc:0.97161
[I 2022-10-17 04:05:02,717] Trial 88 finished with value: 0.959791 and parameters: {'booster': 'gbtree', 'lambda': 6.095281620532628e-07, 'alpha': 1.4719635594117603e-05, 'max_depth': 7, 'eta': 0.6872050672842449, 'gamma': 1.3182553631676556e-05, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92372	train-auc:0.92941
[1]	test-auc:0.93823	train-auc:0.94314
[2]	test-auc:0.94458	train-auc:0.95030
[3]	test-auc:0.94834	train-auc:0.95459
[4]	test-auc:0.95051	train-auc:0.95802
[5]	test-auc:0.95299	train-auc:0.96066
[6]	test-auc:0.95518	train-auc:0.96349
[7]	test-auc:0.95697	train-auc:0.96581
[8]	test-auc:0.95979	train-auc:0.96928
[9]	test-auc:0.96051	train-auc:0.97052
[I 2022-10-17 04:05:03,646] Trial 89 finished with value: 0.960514 and parameters: {'booster': 'gbtree', 'lambda': 5.501592879091811e-08, 'alpha': 1.7263430429942926e-05, 'max_depth': 7, 'eta': 0.6322476699728031, 'gamma': 0.00013289351483954288, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92373	train-auc:0.92941
[1]	test-auc:0.93917	train-auc:0.94374
[2]	test-auc:0.94420	train-auc:0.95044
[3]	test-auc:0.94725	train-auc:0.95415
[4]	test-auc:0.95034	train-auc:0.95796
[5]	test-auc:0.95344	train-auc:0.96142
[6]	test-auc:0.95502	train-auc:0.96389
[7]	test-auc:0.95702	train-auc:0.96592
[8]	test-auc:0.95825	train-auc:0.96763
[9]	test-auc:0.95949	train-auc:0.96974
[I 2022-10-17 04:05:04,559] Trial 90 finished with value: 0.959495 and parameters: {'booster': 'gbtree', 'lambda': 4.215362917710583e-08, 'alpha': 0.00012251736387571747, 'max_depth': 7, 'eta': 0.6504703521885178, 'gamma': 0.00012273117893080445, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92375	train-auc:0.92939
[1]	test-auc:0.93658	train-auc:0.94174
[2]	test-auc:0.94239	train-auc:0.94788
[3]	test-auc:0.94587	train-auc:0.95227
[4]	test-auc:0.94815	train-auc:0.95460
[5]	test-auc:0.95035	train-auc:0.95738
[6]	test-auc:0.95176	train-auc:0.95976
[7]	test-auc:0.95375	train-auc:0.96227
[8]	test-auc:0.95520	train-auc:0.96433
[9]	test-auc:0.95687	train-auc:0.96625
[I 2022-10-17 04:05:05,493] Trial 91 finished with value: 0.956874 and parameters: {'booster': 'gbtree', 'lambda': 4.8545056108330386e-08, 'alpha': 5.427975907771863e-05, 'max_depth': 7, 'eta': 0.520604419266553, 'gamma': 0.00012993976404743906, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92369	train-auc:0.92940
[1]	test-auc:0.93807	train-auc:0.94256
[2]	test-auc:0.94444	train-auc:0.94967
[3]	test-auc:0.94849	train-auc:0.95449
[4]	test-auc:0.95041	train-auc:0.95690
[5]	test-auc:0.95281	train-auc:0.95990
[6]	test-auc:0.95460	train-auc:0.96235
[7]	test-auc:0.95573	train-auc:0.96439
[8]	test-auc:0.95796	train-auc:0.96667
[9]	test-auc:0.95951	train-auc:0.96918
[I 2022-10-17 04:05:06,470] Trial 92 finished with value: 0.959505 and parameters: {'booster': 'gbtree', 'lambda': 1.845441806998115e-07, 'alpha': 0.00016097048493758173, 'max_depth': 7, 'eta': 0.5981511088293853, 'gamma': 0.013499093606930765, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92367	train-auc:0.92934
[1]	test-auc:0.93237	train-auc:0.93709
[2]	test-auc:0.93944	train-auc:0.94335
[3]	test-auc:0.94310	train-auc:0.94750
[4]	test-auc:0.94473	train-auc:0.95017
[5]	test-auc:0.94664	train-auc:0.95237
[6]	test-auc:0.94840	train-auc:0.95460
[7]	test-auc:0.95003	train-auc:0.95663
[8]	test-auc:0.95092	train-auc:0.95798
[9]	test-auc:0.95133	train-auc:0.95898
[I 2022-10-17 04:05:07,427] Trial 93 finished with value: 0.951331 and parameters: {'booster': 'gbtree', 'lambda': 7.307123030383687e-07, 'alpha': 0.000188379804677099, 'max_depth': 7, 'eta': 0.32811724943366183, 'gamma': 0.01413609463800248, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92348	train-auc:0.92921
[1]	test-auc:0.92858	train-auc:0.93359
[2]	test-auc:0.93232	train-auc:0.93662
[3]	test-auc:0.93436	train-auc:0.93886
[4]	test-auc:0.93522	train-auc:0.93987
[5]	test-auc:0.93704	train-auc:0.94175
[I 2022-10-17 04:05:08,047] Trial 94 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.92378	train-auc:0.92941
[1]	test-auc:0.93945	train-auc:0.94411
[2]	test-auc:0.94476	train-auc:0.95075
[3]	test-auc:0.94748	train-auc:0.95434
[4]	test-auc:0.95024	train-auc:0.95786
[5]	test-auc:0.95219	train-auc:0.96090
[6]	test-auc:0.95474	train-auc:0.96411
[7]	test-auc:0.95652	train-auc:0.96683
[8]	test-auc:0.95818	train-auc:0.96919
[9]	test-auc:0.95907	train-auc:0.97043
[I 2022-10-17 04:05:08,977] Trial 95 finished with value: 0.95907 and parameters: {'booster': 'gbtree', 'lambda': 6.928373097904511e-08, 'alpha': 2.0120311376635027e-05, 'max_depth': 7, 'eta': 0.672200549043784, 'gamma': 0.06182745895483645, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
[0]	test-auc:0.92340	train-auc:0.92909
[1]	test-auc:0.92725	train-auc:0.93190
[2]	test-auc:0.92860	train-auc:0.93310
[3]	test-auc:0.93037	train-auc:0.93498
[4]	test-auc:0.93152	train-auc:0.93588
[I 2022-10-17 04:05:09,561] Trial 96 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.91333	train-auc:0.91580
[1]	test-auc:0.92006	train-auc:0.92276
[2]	test-auc:0.92284	train-auc:0.92540
[3]	test-auc:0.92624	train-auc:0.92890
[4]	test-auc:0.92920	train-auc:0.93248
[I 2022-10-17 04:05:10,140] Trial 97 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.79569	train-auc:0.79766
[1]	test-auc:0.80776	train-auc:0.80897
[2]	test-auc:0.82375	train-auc:0.82271
[3]	test-auc:0.83057	train-auc:0.82885
[4]	test-auc:0.83488	train-auc:0.83304
[I 2022-10-17 04:05:10,602] Trial 98 pruned. Trial was pruned at iteration 5.
[0]	test-auc:0.92372	train-auc:0.92937
[1]	test-auc:0.93358	train-auc:0.93776
[2]	test-auc:0.93894	train-auc:0.94422
[3]	test-auc:0.94274	train-auc:0.94850
[4]	test-auc:0.94506	train-auc:0.95153
[5]	test-auc:0.94786	train-auc:0.95453
[6]	test-auc:0.94938	train-auc:0.95614
[7]	test-auc:0.95039	train-auc:0.95777
[8]	test-auc:0.95202	train-auc:0.95948
[9]	test-auc:0.95311	train-auc:0.96132
[I 2022-10-17 04:05:11,587] Trial 99 finished with value: 0.953112 and parameters: {'booster': 'gbtree', 'lambda': 1.9204407580490077e-06, 'alpha': 4.581004319896927e-05, 'max_depth': 7, 'eta': 0.3835203494172305, 'gamma': 0.004413413569472247, 'grow_policy': 'depthwise'}. Best is trial 71 with value: 0.962987.
Best trial:
  Value: 0.962987
  Params: 
    alpha: 9.67597576541745e-06
    booster: gbtree
    eta: 0.562522669088554
    gamma: 1.621994071582901e-05
    grow_policy: depthwise
    lambda: 7.1796112503983475e-06
    max_depth: 8
  Number of estimators: 9
In [60]:
def get_sample_weight(y):
    sw = [len(y)/(len(np.unique(y))*i) for i in np.bincount(y) ]
    return np.array([sw[j] for j in y])




study_name="xgboost"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
trial = study.best_trial
best_params = study.best_params
dtrain = xgb.DMatrix(x_train, y_train,get_sample_weight(y_train))
dtest = xgb.DMatrix(x_train1, y_train1, get_sample_weight(y_train1))
dtest_unseen = xgb.DMatrix(x_test1, y_test1,get_sample_weight(y_test1) )
watchlist = [(dtest, 'test'), (dtrain, 'train')]


# watchlist = [(x_train1,y_train1), (x_train,y_train)]
param = {}
param['num_class']=3
param['objective']="multi:softprob"
param['eval_metric']="auc"
param['tree_method']="hist"

best_params['num_class']=3
best_params['objective']="multi:softprob"
best_params['eval_metric']="auc"
best_params['tree_method']="hist"

param['n_estimators']=trial.user_attrs["n_estimators"]
param['reg_alpha']=best_params['alpha']
param['reg_lambda']=best_params['lambda']
param['learning_rate']=best_params['eta']
param['booster']=best_params['booster']
param['grow_policy']=best_params['grow_policy']
param['max_depth']=best_params['max_depth']
param['gamma']=best_params['gamma']

xgbc0 = xgb.train(best_params, dtrain, evals=watchlist,num_boost_round=trial.user_attrs["n_estimators"])
yhat  = xgbc0.predict(dtest_unseen)
ytrainhat = xgbc0.predict(dtrain)
yvalhat = xgbc0.predict(dtest)

# xgbc0 = xgb.XGBClassifier(**param)
# xgbc0.fit(x_train, y_train.to_numpy(),sample_weight=get_sample_weight(y_train),eval_set=watchlist)
# preds = xgbc0.predict_proba(x_test1)
[0]	test-auc:0.93342	train-auc:0.94200
[1]	test-auc:0.94515	train-auc:0.95315
[2]	test-auc:0.95079	train-auc:0.95930
[3]	test-auc:0.95415	train-auc:0.96274
[4]	test-auc:0.95581	train-auc:0.96527
[5]	test-auc:0.95797	train-auc:0.96795
[6]	test-auc:0.95926	train-auc:0.97040
[7]	test-auc:0.96016	train-auc:0.97198
[8]	test-auc:0.96187	train-auc:0.97438
In [79]:
ytrainhat = xgbc0.predict(dtrain)
yvalhat = xgbc0.predict(dtest)
In [129]:
def plot_sigbkg(class_,name):   
#     plt.figure(figsize=(6, 4))
    plt.hist(yhat[:, class_][y_test1==class_],label='test sig',density=True,alpha=0.5)
    plt.hist(yhat[:, class_][y_test1!=class_],label='test bkg',density=True,alpha=0.5)
    plt.hist(ytrainhat[:, class_][y_train==class_],label='train sig',density=True,histtype='step')
    plt.hist(ytrainhat[:, class_][y_train!=class_],label='train bkg',density=True,histtype='step')

    counts,bin_edges = np.histogram(yvalhat[:, class_][y_train1==class_],density=True)
    bin_centers = (bin_edges[:-1] + bin_edges[1:])/2.
    plt.plot(bin_centers, counts,marker="o",linestyle="None",label="val sig")

    counts,bin_edges = np.histogram(yvalhat[:, class_][y_train1!=class_],density=True)
    bin_centers = (bin_edges[:-1] + bin_edges[1:])/2.
    plt.plot(bin_centers, counts,marker="*",linestyle="None",label="val bkg")
    plt.title(name)
    plt.legend()
    plt.tight_layout()
#     plt.show()

    
def plot_roc(class_,name):
#     plt.figure(figsize=(6,4))
    roc1 = roc_curve(1*(y_test1==class_) , yhat[:, class_])
    fpr,tpr,_=roc1
    plt.plot(fpr, tpr, 'b',label=f'test (area = {auc(fpr,tpr)*100:.1f})%')
    roc1 = roc_curve(1*(y_train==class_) , ytrainhat[:, class_])
    fpr,tpr,_=roc1
    plt.plot(fpr, tpr, 'r',label=f'train (area = {auc(fpr,tpr)*100:.1f})%')

    roc1 = roc_curve(1*(y_train1==class_) , yvalhat[:, class_])
    fpr,tpr,_=roc1
    plt.plot(fpr, tpr, 'g',label=f'val (area = {auc(fpr,tpr)*100:.1f})%')

    plt.legend()
    plt.title(name)
#     plt.show()
In [134]:
plt.figure(figsize=(15,15))
plt.subplot(3,2,1)
plot_sigbkg(0,"GALAXY") 
plt.subplot(3,2,2)
plot_roc(0,"GALAXY") 
# plt.show()
plt.subplot(323)
plot_sigbkg(1,"QSO") 
plt.subplot(324)
plot_roc(1,"QSO")  
# plt.show()
plt.subplot(325)
plot_sigbkg(2,"STAR")
plt.subplot(326)
plot_roc(2,"STAR") 
plt.show()
In [154]:
study_name="xgboost"
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.load_study(study_name=study_name, storage=storage_name)
plot_optimization_history(study).show()
In [155]:
plot_slice(study)
In [157]:
plot_contour(study,params=['alpha','eta'])
In [158]:
plot_contour(study,params=['alpha','lambda'])
In [159]:
plot_contour(study,params=['gamma','lambda'])
In [156]:
plot_parallel_coordinate(study)

Conclusion

I see that using the MSE lost function is much faster than the log likelihood, and so optmizing the MSE version of optmizing technique much faster. I could comfortably test 400 trials for Stochastic decent but training 40 trails for Line search takes more than 100x the time. None of the handmade algrothim are performing nicely, most of them have accuracy of in range of 70% to 75%. Although, I did not test sklearn version of the logistic regression instead xgboost[4] was used which is giving usuable results, ~95% accuracy.

Citation

[1] Abdurro’uf et al., The Seventeenth data release of the Sloan Digital Sky Surveys: Complete Release of MaNGA, MaStar and APOGEE-2 DATA (Abdurro’uf et al. submitted to ApJS) [arXiv:2112.02026] https://www.sdss.org/dr17/
[2] Fedesoriano. (January 2022). Stellar Classification Dataset - SDSS17. Retrieved [15/10/2022] from https://www.kaggle.com/fedesoriano/stellar-classification-dataset-sdss17
[3] WESSAM SALAH WALID (September 2022). Stellar Classification - SDSS17 (4 ML Models). Retrieved [15/10/2022] from https://www.kaggle.com/code/wessamwalid/stellar-classification-sdss17-4-ml-models#Missing-Value-Analysis
[4] Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785–794). New York, NY, USA: ACM. https://doi.org/10.1145/2939672.2939785

In [ ]: